Graphic artifacts when using RENDER_SCALE_QUALITY

Some additional info…

Here is what I tried. I’m sure I messed this up, but…

I changed the SDL_BlendMode enum to the following:

Code:
typedef enum
{
    SDL_BLENDMODE_NONE = 0x00000000,         /**< no blending
                                                  dstRGBA = srcRGBA */
    SDL_BLENDMODE_BLEND = 0x00000001,        /**< alpha blending
                                                  dstRGB = (srcRGB * srcA) + (dstRGB * (1-srcA))
                                                  dstA = srcA + (dstA * (1-srcA)) */
    SDL_BLENDMODE_ADD = 0x00000002,          /**< additive blending
                                                  dstRGB = (srcRGB * srcA) + dstRGB
                                                  dstA = dstA */
    SDL_BLENDMODE_MOD = 0x00000004,          /**< color modulate
                                                  dstRGB = srcRGB * dstRGB
                                                  dstA = dstA */
    SDL_BLENDMODE_PREMULTIPLIED = 0x00000008 /**< new test mode */
} SDL_BlendMode;

In SDL_render_gl.c, I changed the switch statement in GL_SetBlendMode() to the following:

Code:
switch (blendMode) {
case SDL_BLENDMODE_NONE:
    data->glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    data->glDisable(GL_BLEND);
    break;
case SDL_BLENDMODE_BLEND:
    data->glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    data->glEnable(GL_BLEND);
    data->glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    break;
case SDL_BLENDMODE_ADD:
    data->glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    data->glEnable(GL_BLEND);
    data->glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE, GL_ZERO, GL_ONE);
    break;
case SDL_BLENDMODE_MOD:
    data->glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    data->glEnable(GL_BLEND);
    data->glBlendFuncSeparate(GL_ZERO, GL_SRC_COLOR, GL_ZERO, GL_ONE);
    break;
case SDL_BLENDMODE_PREMULTIPLIED:
    data->glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    data->glEnable(GL_BLEND);
    data->glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    break;
}

I rebuilt the SDL2 library and, using the newly built lib/DLL, added the following code:

Code:

SDL_SetTextureBlendMode(texturefile, SDL_BLENDMODE_PREMULTIPLIED);

I also tried:

Code:
SDL_SetRenderDrawBlendMode(renderer, SDL_BLENDMODE_PREMULTIPLIED);

After trying these things, nothing changed.

Here’s what I don’t understand. I created the following image:

[Image: http://i.imgur.com/2kwVot9.png ]

When I apply no blend mode flag to the image, or when I apply SDL_BLENDMODE_BLEND with the following:

Code:

SDL_SetTextureBlendMode(cloudbox, SDL_BLENDMODE_BLEND);

I get the bordered result: [Image: http://i.imgur.com/eferkhX.png ]

But when I apply the following flag:

Code:

SDL_SetTextureBlendMode(cloudbox, SDL_BLENDMODE_NONE);

I get the following result: [Image: http://i.imgur.com/uNeZcOf.png ]

It’s perfect… but how is it even working? If the blend mode is set to SDL_BLENDMODE_NONE, how is it drawing the transparent areas correctly, as though it were SDL_BLENDMODE_BLEND? And why is this one image not drawing the dark border?

[…]
[…picture with unwanted gray outline…]
[…]

This happens because the combination of blending and magnification
filtering causes the color of the transparent pixels to leak in
through the magnification filter.
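
To see what that means in numbers (illustrative values, not taken from the actual images): suppose a linear magnification filter samples exactly halfway between an opaque blue edge pixel and a transparent neighbor that was saved as black.

Code:
edge pixel:        RGBA = (0, 0, 255, 255)   /* opaque blue */
transparent pixel: RGBA = (0, 0,   0,   0)   /* saved as black */

filtered sample = (edge + transparent) / 2 ≈ (0, 0, 128, 128)

That sample is a half-opaque *dark* blue, so alpha blending paints a dark fringe. If the transparent pixel had kept the blue color, RGBA = (0, 0, 255, 0), the sample would be (0, 0, 255, 128): the correct blue at half opacity, and no fringe.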

To avoid this, you need to have your graphics application save the
color of the transparent pixels - even if they’re supposed to be
completely invisible. And of course, for that to work properly, you
need to make sure those pixels actually are the same color as the
pixels around the edge of the object. That is, if you remove the alpha
channel in the application, the transparent areas should become blue,
in this case.

Or, you can add some code to your loader that tries to fix this, but
that’s not entirely trivial to do correctly in every case.

On Tue, Feb 4, 2014 at 11:44 PM, ronkrepps wrote:


//David Olofson - Consultant, Developer, Artist, Open Source Advocate


David Olofson wrote:

This happens because the combination of blending and magnification
filtering causes the color of the transparent pixels to leak in
through the magnification filter.

To avoid this, you need to have your graphics application save the
color of the transparent pixels - even if they’re supposed to be
completely invisible. And of course, for that to work properly, you
need to make sure those pixels actually are the same color as the
pixels around the edge of the object. That is, if you remove the alpha
channel in the application, the transparent areas should become blue,
in this case.

Or, you can add some code to your loader that tries to fix this, but
that’s not entirely trivial to do correctly in every case.

Others have told me that I could change the way I “load” the images and actually premultiply them at load time. This is what I’m trying to figure out. I am using SDL_image to load the PNGs, but I don’t see any way to do it with that library. Instead, I may have to use another library that has a premultiply option? I know there is some information about it in the libpng manual, but not much. Surely someone has done this before?

I think pre-multiplied alpha has nothing to do with the scaling artifact. When scaling is performed, additional pixels have to be created using color values from the surrounding pixels, and that is what causes the problem, not the alpha function or the alpha channel as such.

mr_tawan wrote:

I think pre-multiplied alpha has nothing to do with the scaling artifact. When scaling is performed, additional pixels have to be created using color values from the surrounding pixels, and that is what causes the problem, not the alpha function or the alpha channel as such.

According to some of the info in this thread and a couple of others I’ve read, I have to do two things:

  1. Load the image with premultiplied alpha, and

  2. Render with

     	glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
     	glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

I’m not sure how to load the image with premultiplied alpha though.

[…]

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

I’m not sure how to load the image with premultiplied alpha though.

Premultiplied alpha is primarily a performance optimization, so you
can use the above instead of this:

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

However, it actually does eliminate this artifact as well, because
what you do is mask out the background (the GL_ONE_MINUS_SRC_ALPHA
part), and then apply the texture using additive blending - and adding
black means “no operation!” Most importantly here, this means that
interpolating from any color towards black fades towards “no
operation”, rather than towards black.
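
As a concrete illustration (numbers chosen just for the example, continuing the half-sample from earlier), the blend equation with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) is:

Code:
dstRGB = srcRGB + (dstRGB * (1-srcA))

For a fully transparent premultiplied texel, srcRGB = (0,0,0) and srcA = 0, so dstRGB is left untouched; adding black really is a no-op. For a filtered sample halfway between opaque blue and transparent, srcRGB = (0,0,128) and srcA ≈ 0.5, so you get half the edge color over half the background, with no extra darkening.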

I don’t know if there is any feature to premultiply alpha when
loading. If not, you’ll have to process the pixels of the surface
yourself; load as 32 bit RGBA with a specific byte order, multiply R,
G and B with A.

You could probably use SDL to alpha blend the image over a black
background, but you still need the alpha channel for the
GL_ONE_MINUS_SRC_ALPHA part, so I don’t think you can avoid some pixel
level coding there.
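
A minimal sketch of that pixel-level pass might look like this (my example, not an SDL_image feature; it assumes SDL 2.0.5+ for SDL_PIXELFORMAT_RGBA32, and the function name is made up):

Code:
#include <SDL.h>
#include <SDL_image.h>

/* Load an image and premultiply alpha in place. Converting to RGBA32
   first gives a known byte order, so the channel offsets below are
   valid no matter what format IMG_Load returned. */
SDL_Surface *load_premultiplied(const char *path)
{
    SDL_Surface *loaded = IMG_Load(path);
    if (!loaded)
        return NULL;

    SDL_Surface *rgba = SDL_ConvertSurfaceFormat(loaded, SDL_PIXELFORMAT_RGBA32, 0);
    SDL_FreeSurface(loaded);
    if (!rgba)
        return NULL;

    for (int y = 0; y < rgba->h; ++y) {
        Uint8 *row = (Uint8 *)rgba->pixels + y * rgba->pitch;
        for (int x = 0; x < rgba->w; ++x) {
            Uint8 *p = row + x * 4;  /* p[0]=R, p[1]=G, p[2]=B, p[3]=A */
            p[0] = (Uint8)(p[0] * p[3] / 255);
            p[1] = (Uint8)(p[1] * p[3] / 255);
            p[2] = (Uint8)(p[2] * p[3] / 255);
        }
    }
    return rgba;
}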

As to the alternative, full alpha blending: I’m not sure Photoshop and
other applications actually can do the right thing. (I’m seeing a
lot of complaints about this on the 'net.) In GIMP, you can check
"Save color values from transparent pixels" when exporting to PNG, and
it’ll work just fine - provided the original image is correct,
obviously! That’s usually not a problem, as antialiasing from
selections etc. is implemented on the alpha channel, but it’s easy to
get it wrong with pixel art, where you basically use alpha as a 1-bit
channel. If you remove the alpha channel in GIMP, you’ll see the color
information in those transparent pixels. I’m not sure what Photoshop
does, but apparently some applications automatically set all
zero-alpha pixels to white or black.

On Wed, Feb 5, 2014 at 5:21 AM, ronkrepps wrote:


//David Olofson - Consultant, Developer, Artist, Open Source Advocate


It’s likely not any kind of meaningful performance optimization at all, on desktop hardware at least.

Premultiplied alpha is generally used for reasons other than performance. Blending correctness (e.g. getting rid of “fringes”), better image compression, and advanced blending techniques (going between additive and alpha blending without changing blend modes) are a few reasons.


http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Premultiplied%20alpha]]
http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx
http://blogs.msdn.com/b/shawnhar/archive/2010/04/09/how-shawn-learned-to-stop-worrying-and-love-premultiplied-alpha.aspx
http://blogs.msdn.com/b/shawnhar/archive/2009/11/07/premultiplied-alpha-and-image-composition.aspx

On Feb 5, 2014, at 4:47 AM, David Olofson wrote:

On Wed, Feb 5, 2014 at 5:21 AM, ronkrepps wrote:
[…]

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

I’m not sure how to load the image with premultiplied alpha though.

Premultiplied alpha is primarily a performance optimization,



[…]

True, that aspect is probably completely irrelevant to most of us
these days. It used to matter when you could still find machines and
devices without 3D accelerators. :wink:

I don’t think there is such a thing as a “not meaningful” performance
optimization in an AAA engine though, and I don’t think there ever
will be. Hardware gets faster, but people want more out of it - and
now they’re selling 4K displays… o.O

On Wed, Feb 5, 2014 at 9:54 AM, Alex Szpakowski wrote:

It’s likely not any kind of meaningful performance optimization at all, on desktop hardware at least.

//David Olofson - Consultant, Developer, Artist, Open Source Advocate


I guess I just need to understand this on a lower level.
Is it not fixable using just SDL code, or must SDL itself be modified?

I kind of understand what you are saying, but I just thought there wouldn’t be an issue doing it with just SDL code.

I guess I just need to understand this on a lower level.
Is it not fixable using just SDL code, or must SDL itself be modified?

Depends on the tools you use. I’m not sure what you’re using, but I
know some applications can’t get it right. What you need, to do it
without post-processing tools/code, is PNGs with correct color
information in the transparent pixels. Then you should be able to use
"full" alpha blending without getting these artifacts.

If all else fails, you could use GIMP to fix it manually; import the
image, convert alpha to selection, fill with the color of the edge
pixels (provided it’s a single color all the way around - or you’re in
for some manual work), delete selected area (this is where you get
transparent pixels with the correct color), save as PNG with color
information in transparent pixels.

I kind of understand what you are saying, but I just thought there wouldn’t be an issue doing it with just SDL code.

One would think so, but these things aren’t as trivial as they may
seem at first, and many of the tools we use weren’t really designed
for game development and the like in the first place, so there are
almost always glitches at various places in the tool chain…

On Wed, Feb 5, 2014 at 7:00 PM, ronkrepps wrote:


//David Olofson - Consultant, Developer, Artist, Open Source Advocate


I like to “green screen” all of my PNG source assets, as you can see in my
code. Also, I don’t trust what comes from the file as far as propagation goes,
so I also redundantly set the green screen. I like this as a best practice
because the green shows up clearly when there are problems, and the edges of
my black-outlined images always turn out better. So this is me, sharing a
best practice.

Where exactly is RENDER_SCALE_QUALITY defined in the SDL documentation?
I can’t find it.

http://wiki.libsdl.org/CategoryHints#Hints

On Sun, Feb 9, 2014 at 5:17 PM, mattbentley wrote:

Where exactly is RENDER_SCALE_QUALITY defined in the SDL documentation?
I can’t find it.


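For reference, it’s a hint rather than a function, so you set it from code like this (value names as documented on that wiki page):

Code:
/* Set before creating textures: "0"/"nearest" = nearest pixel sampling,
   "1"/"linear" = linear filtering, "2"/"best" = anisotropic (Direct3D). */
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");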

The original alpha edge problem noted in this thread still exists in my test today on a Mac and Linux.

I see this as a fundamental mistake in the way alpha blending is handled.

To me, the various comments about a hack to ‘set the color of transparent edge pixels’ should be disregarded as an absurd workaround.

How, exactly are we supposed to scale a game to fullscreen without nonsense hacks? Doesn’t this mistake in SDL break basic use cases across the board where core quality is concerned?

If a person doesn’t use SDL and is writing code more directly against the GPU in a modern approach, then compositing one alpha image on top of another and scaling it to fullscreen with a textured quad is a trivial matter, and could be considered a very typical situation; I do it all the time.

Why then, is this fundamental component implemented incorrectly, and will anything actually be done about it in this community? Does this community take core problems like this seriously?

I also noticed in another somewhat relevant thread, how Mason appeared to be fighting warped cultural ideologies, and unable to convince the people with rational dialog:


The threads read to me like this:
It is not that the culture is merely stupid, unthinking, or limited in attentional effort or time; rather, it sounds like the ideologies are so warped and specific that basic wrong notions get elaborated with enormous detail and clarity, so that the excuses for why things are wrong today become fortified and indisputable. Thus, irrational things continue to exist. Now, someone please tell me how such situations can ever be overcome? Certainly this type of mental phenomenon plagues the developer culture at large, and not just SDL.

Surely, even my post may be as fruitless as Mason’s attempt at shining light on the absurdities going on in the culture, but how can we move on from this era, unless these incorrect mindsets and misplaced ideologies are confronted head on as irrational and done away with? As long as the cultural mindsets continue like this, how can we expect developer communities to ever meet and sustain higher standards of quality and re-establish strong foundations? Even if a new fork were established, if people from this same cultural tribe of mindsets became dominant participants, then it would merely end up back in the same current mess.

Perhaps, very few people even understand what the real foundational issues are here, how they came about, and why the cultural trends sustain that type of result, but if anything, I would like to see people speak out more like Mason, and if they have no other choice, to gather in a separate community that holds higher standards for essential elementary things, compared to what appears to be the majority population of today.

Every time I try to keep an open mind and sample a public library like this, I run into these same types of problems, can only draw these kinds of wide-spanning, degrading conclusions about the culture, and resort to going back to doing everything myself at the lowest level possible.

If we are ever to work together collectively and achieve the fundamental standards of quality of this kind, I can only imagine that the current majority group would either have to change their beliefs to Mason’s, or be prevented from participating in our minority culture. Because, as it is right now, when both cultures try to exist in the same space, the warped majority culture is overriding the minority of higher standards and destroying it completely.

If anyone thinks this set of conclusions is wrong, then please tell me how to perform this basic scaling in SDL without nonsense hacks, and I will sincerely apologize for not reading and testing thoroughly enough.

Hi
There is no problem in SDL.
This is a known problem that everyone faces, from indie developers to AAA studios like Rockstar.
Use premultiplied alpha if you want to avoid problems.
And if you’re interested, read http://www.adriancourreges.com/blog/2017/05/09/beware-of-transparent-pixels/


But, can you point to the SDL functions to control it?

I had spent the entire day searching, only to find people from a number of years ago either facing this problem and proposing absurd hacks like this old thread, or saying that SDL does not expose control of this (so one would need to drop into OpenGL/Metal/Vulkan/DirectX directly).

If you accept that it is such a common and fundamental thing, and are not adopting a cognitive bias on the topic, don’t you think it is strange that the documentation and forum history make it seem like SDL does not account for this? Shouldn’t such a thing be implemented and then clearly documented?

A search from as recent as 2018 returns your very own thread, rmg.nik, where you failed to find built-in control of this in SDL and instead manually altered every pixel on the CPU (dare I not derail the topic further by stating that your chosen solution is not even on the GPU, and is slow enough to matter).

I hope you can understand and differentiate what I’m saying are the real issues here with the way this has been handled by the SDL development community.

There are several ways to do this.

  1. Use texture packer with padding and pixel bleeding (e.g. https://www.codeandweb.com/texturepacker, https://github.com/wo1fsea/CppTexturePacker etc).
  2. Use pre-multiplied alpha and texture packer to export texture with pre-multiplied alpha.
  3. Pre-multiply the alpha when loading the texture into memory. This increases startup times, but you don’t need packers at all.
    Code sample:
SDL_Surface* sfc = IMG_Load("image_filename.ext");
if (sfc->format->BytesPerPixel == 4)
{
    Uint8* pixels = (Uint8*)sfc->pixels;
    for (int y = 0; y < sfc->h; ++y)
    {
        for (int x = 0; x < sfc->w; ++x)
        {
            int index = y * sfc->pitch + x * sfc->format->BytesPerPixel;
            SDL_Color* target_pixel = (SDL_Color*)&pixels[index];
            /* Multiply each color channel by the pixel's alpha. */
            target_pixel->r = (Uint8)(target_pixel->r * target_pixel->a / 255);
            target_pixel->g = (Uint8)(target_pixel->g * target_pixel->a / 255);
            target_pixel->b = (Uint8)(target_pixel->b * target_pixel->a / 255);
        }
    }
}
SDL_Texture* tex = SDL_CreateTextureFromSurface(renderer, sfc);
SDL_FreeSurface(sfc);

SDL_BlendMode pma_blend = SDL_ComposeCustomBlendMode(
    SDL_BLENDFACTOR_ONE, SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, SDL_BLENDOPERATION_ADD,
    SDL_BLENDFACTOR_ONE, SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, SDL_BLENDOPERATION_ADD
);

SDL_SetTextureBlendMode(tex, pma_blend);

Suggestion 1 is the same comical hack I already mentioned should be disregarded as a sincere approach.
Suggestion 2 avoids confronting the statement at hand, that ‘SDL does not implement this adequately’.
Suggestion 3 is inefficient on multiple counts, in both processing time and extra memory allocations; it is merely an attempt to hack SDL into doing this from the outside, which is why it suffers from these setbacks, which are more than minor in their side effects.

All three of these are inadequate answers that avoid confronting the issues at hand, and the flaws in the ideologies that led SDL to this point and keep it stuck here.

It does, since the introduction of SDL_ComposeCustomBlendMode() in SDL 2.0.6 (which is necessary to support textures with pre-multiplied alpha). It may be that your searches went back so far that this function had not yet been added, when it could reasonably have been argued that SDL failed to provide support for this particular issue.

As for why it doesn’t get mentioned much, I think that’s only because it’s very rarely of practical significance. I don’t bother to use pre-multiplied alpha because I don’t think I’ve ever seen this effect in practice, but if I do, I know how to fix it in SDL2.

I do not understand why you think it should be the responsibility of SDL to “confront” this issue. The described behavior is fundamental to how all the graphics backends work when scaling textures with non-opaque pixels; SDL is simply a wrapper (‘abstraction layer’) providing a consistent cross-platform interface. So long as SDL exposes the means to work around it, its job is done.

( Hi! A newbie here, I hope there is no problem if I answer a 7 year old thread! )

I thought those antialiasing issues were natural and to be expected, but found out that they only appeared when I exported my art to PNG from Clip Studio Paint or Adobe Photoshop.
The reason: the transparent pixels contain color data.

  • Photoshop: RGBA(255,255,255,0)
  • Clip Studio: RGBA(n,n,n,0) (where n corresponds to the surrounding pixels)

Exporting the graphics with GIMP and unchecking the option “Save color values from transparent pixels” seems to have solved it :slight_smile:

Furthermore, this ImageMagick command does the same thing, so it can be easily automated:
convert.exe 1.png -fill "rgba(0,0,0,0)" -opaque "rgba(255,255,255,0)" png32:1.png

This does not solve how SDL2 (or libpng, or OpenGL, I am not sure) behaves, I do not have enough knowledge to solve it in the code. But removing the color from the fully-transparent pixels solves this problem for me :slight_smile:

Update: It seems that I am not actually solving it. It helps a bit, but does not fully solve the issue, as some artifacts can still appear over bright backgrounds.

Update 2: This option in TexturePacker solves the issue, but I would rather use a FOSS solution, so people will not require a proprietary tool to use an open-source engine.

Update 3: (Please excuse all the recent notifications.) There is a FOSS solution :slight_smile: !
The method is “alpha bleeding”: