Graphic artifacts when using RENDER_SCALE_QUALITY

I noticed something odd when using RENDER_SCALE_QUALITY of linear or best.
There are certain artifacts that appear on some images and around rendered text.

If you check the screens included here you can see what I mean:

This image uses the best option:

http://i.imgur.com/UCLLJpO.png

In this image you can see, at the top of the cloud box, some extra lines that are not in the actual image.

This image also uses the best option:

http://i.imgur.com/jAUVsvf.png

In this image you can clearly see that the corners of the smaller boxes have lines extending off them. The biggest flaw, though, is in the background: a giant line runs across the image. The background is made of three images (top, middle, and bottom), as the middle portion is an animation that plays. The line separates the images right where their seams are.

There are also extra lines around text; for example, see the blue line above “large”.

Check this image for what it looks like using nearest at the same window size:

http://i.imgur.com/YlNOHJC.png

It seems that some of the image might be clipping and getting rendered at the top instead, as if it went below its bounds and wrapped around to the top. Notice that the line in the text “large” lines up right with the bottom of the “G”; that seems to be what is happening with all of the text anomalies.

I can use the nearest neighbor option when rendering on PC and that’s fine. But for Android I’d like to use best because it looks much better on smaller screens; on a larger screen like a tablet, though, you can clearly see these artifacts.

There are more artifacts that I can only see on tablets, where the corners of rounded images (like that cloud box) contain garbage pixel data and random colors. I wasn’t able to get a screenshot of this yet.

Is it possible this has to do with the image type I am saving as, or maybe another option I am missing?

Also, as a bonus… when trying this app out on a Kindle, some of the text is mirrored!
Yeah, some, not all of it… strange… And only on a Kindle. I have tested my app on a PC, an Android phone, and a Kindle. The text is fine on the PC and the Android phone.

Hmm, yeah, now that I think about it more, the image is wrapping at its edges. You can see that the lines on the top of the cloud box are from the bottom of the image, but were rendered above it instead.

Hiyya, looks like a data mismatch exception handled differently on different platforms. Could I see some code that loads and displays the cloud texture and the font? When I get something weird like that, I find that my code was good, but with a little clarification here or there it starts popping. I’m glad to see you going with png files. I’m in love with png, ttf, ascii and also namespaces, filestreams, stringstreams, bitflags, inheritance, and polymorphism.

~ “In hindsight, I realized I could see into the future. Which is kind of like having premonitions of flashbacks.” — Steven Wright


R Manard wrote:

Hiyya, looks like a mismatch of data exception handled differently on different platforms. Could I see some code that loads and displays the cloud texture and the font? When I get something weird like that I find that my code was good but with a little clarification here or there it starts popping. I’m glad to see you going with png files. I’m in love with png, ttf, ascii and also namespaces, filestreams, stringstreams, bitflags, inheritance, and polymorphism.

To load fonts:

Code:

TTF_Font* f = LoadFont("SourceSansPro-Regular.ttf", 18);
if (f == nullptr)
    return false;
mTextManager->AddFont("Text", f);

f = LoadFont("SourceSansPro-Regular.ttf", 18);
if (f == nullptr)
    return false;
TTF_SetFontOutline(f, 1);
mTextManager->AddFont("TextOutline", f);

Fonts are added to the text manager which is called every frame to render each text object to the screen:

Code:

void TextManager::Render(int group)
{
    std::vector<Text*>::iterator it;
    SDL_Rect d;

    for (it = mTextObjects.begin(); it != mTextObjects.end(); it++)
    {
        Text* t = (*it);

        if (group == -1 || t->GetGroup() == group)
        {
            //rerender the texture if it has changed
            if (!t->GetOutline())
                t->Render(mRenderer, mFonts.find(t->GetFontName())->second, NULL);
            else
                t->Render(mRenderer, mFonts.find(t->GetFontName())->second, mFonts.find(t->GetOutlineFontName())->second);

            //render the texture to the screen
            SDL_QueryTexture(t->GetTexture(), NULL, NULL, &d.w, &d.h);

            switch (t->GetHorizontalJust())
            {
                case HJ_CENTER:
                    t->SetX((mScreenWidth / 2.0f) - (d.w / 2.0f));
                    break;

                case HJ_LEFT:
                    t->SetX(0);
                    break;

                case HJ_RIGHT:
                    t->SetX(mScreenWidth - d.w);
                    break;

                default:
                    d.x = static_cast<int>(t->GetX());
                    break;
            }

            switch (t->GetVerticalJust())
            {
                case VJ_CENTER:
                    t->SetY((mScreenHeight / 2.0f) - (d.h / 2.0f));
                    break;

                case VJ_TOP:
                    t->SetY(0);
                    break;

                case VJ_BOTTOM:
                    t->SetY(mScreenHeight - d.h);
                    break;

                default:
                    break;
            }

            if (t->GetHorizontalAlignment() == HA_LEFT)
                d.x = static_cast<int>(t->GetX());
            else if (t->GetHorizontalAlignment() == HA_CENTER)
                d.x = static_cast<int>(t->GetX() - (d.w / 2.0f));
            else if (t->GetHorizontalAlignment() == HA_RIGHT)
                d.x = static_cast<int>(t->GetX() - d.w);

            d.y = static_cast<int>(t->GetY());

            //Change the alpha if it is fading
            if (t->GetState() == TS_FADE)
                SDL_SetTextureAlphaMod(t->GetTexture(), static_cast<int>((t->GetFadeTimer() / t->GetFadeDuration()) * 255));

            SDL_RenderCopy(mRenderer, t->GetTexture(), NULL, &d);
        }
    }
}

Each text object’s stored texture is rendered when the object is created or changed:

Code:

void Text::Render(SDL_Renderer* renderer, TTF_Font* font, TTF_Font* outlineFont)
{
    if (mDirty)
    {
        SDL_DestroyTexture(mTexture);

        if (!mOutline)
            mTexture = RenderText(mText, mColor, font, renderer);
        else
            mTexture = RenderOutlinedText(mText, mColor, mOutlineColor, font, outlineFont, renderer);

        mDirty = false;
    }
}

SDL_Texture* RenderText(const String& message, SDL_Color color, TTF_Font* font, SDL_Renderer* renderer)
{
    //Render the message to an SDL_Surface and create a texture to return
    SDL_Surface* surface = TTF_RenderText_Blended(font, message.c_str(), color);
    //surface = TTF_RenderText_Solid(font, message.c_str(), color);
    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);

    //Clean up unneeded stuff
    SDL_FreeSurface(surface);

    return texture;
}

SDL_Texture* RenderOutlinedText(const String& message, SDL_Color color, SDL_Color outlineColor, TTF_Font* font, TTF_Font* outlineFont, SDL_Renderer* renderer)
{
    //Render the message to an SDL_Surface and create a texture to return
    SDL_Surface* bgSurface = TTF_RenderText_Blended(outlineFont, message.c_str(), outlineColor);
    SDL_Surface* fgSurface = TTF_RenderText_Blended(font, message.c_str(), color);

    SDL_Rect r = {TTF_GetFontOutline(outlineFont), TTF_GetFontOutline(outlineFont), fgSurface->w, fgSurface->h};

    /* blit text onto its outline */
    SDL_SetSurfaceBlendMode(fgSurface, SDL_BLENDMODE_BLEND);
    SDL_BlitSurface(fgSurface, NULL, bgSurface, &r);

    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, bgSurface);

    //Clean up unneeded stuff
    SDL_FreeSurface(fgSurface);
    SDL_FreeSurface(bgSurface);

    return texture;
}

Sorry, I thought you had an issue also with the font. So the image texture is the thing at issue; that’s the relevant code. I’m seeing so far code that decides on the fly what the dimensions and placements are. Try absolute values for testing, like maybe a white square with a single-pixel-wide red border. If you print out the data at each juncture, the area of corruption is more easily found, like:

#include <fstream>
#include <sstream>
#include <string>

using namespace std;

ofstream fout("diag_main.txt");
fout << "main >accessed" << endl;
fout << "stuff value:" << stuff << endl;
fout.close();

or maybe on the screen, like:

stringstream myString;
string stringtime = "";

then later you do:

myString.str(""); // most awesome way to clear a stringstream
myString << textcycles; // int to stringstream
stringtime = "program cycles every hundredth of a second: " + myString.str(); // stringstream to string

drawText(stringtime.c_str(), font15, current.w / 2, 500, 0);


These are actually symptoms of some pretty common OpenGL issues. I haven’t
looked into what the GL renderer does for texture env parameters, but
perhaps the wrapping could be prevented with something like this:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

Jonny D

It seems it was wrapping around because the first line of pixels of the cloud image was transparent. I removed the transparent line, and you can clearly see what is probably the underlying issue here: comparing this image to the first, you can see that when I use best, it adds a border to some of the images.

The clouds have a dark border (which is actually not in the original image).

Very strange!

R Manard wrote:

Sorry, I thought you had an issue also with the font. So the image texture is the thing at issue; that’s the relevant code. I’m seeing so far code that decides on the fly what the dimensions and placements are. Try absolute values for testing, like maybe a white square with a single-pixel-wide red border. If you print out the data at each juncture, the area of corruption is more easily found, like:

I’m not sure what I’m supposed to be figuring out here with your example code. Could you explain?

Textures have data that they are composed of. I was illustrating how you could create a text file with that data in it, and then write the data to it again at each juncture. Any time you move the data or change it in any way, you can send it to the file to compare. After that I was showing that you could put it on the screen. It’s just general information about troubleshooting; there are many other ways to go about it.


This is what I think, which could be wrong because I’ve been away from 3D things for quite a long time. Feel free to correct me (or even bash me :slight_smile: ). I haven’t had time to prove my hypothesis yet.

The nearest-neighbor technique takes the value from the pixel that is nearest to the target pixel position. The result is quite predictable. Texture filtering, on the other hand, takes samples/pixels/texels (whatever) from an area around the target pixel to calculate the result pixel. The difference between algorithms is the pattern of the area that pixels are taken from.

When the target pixel is on the edge of the texture, the filter also takes an off-image pixel into the calculation. This off-image pixel might be (0,0,0,0), or even a pixel from the other side of the image; it could be anything, based on the OpenGL context I guess. Anyway, if the off-image pixel value is (0,0,0,0), the pixels around the edge will be darkened, resulting in a weird-looking dark border. Usually these pixels also end up with a lower alpha value, so they get blended into the background (and don’t look quite as ugly).

Also, if the texture contains a transparent area, the pixels close to that area might be darkened (or even lightened) as well, because the artist usually paints the transparent area black.

So I think that how the filtering works should be taken into consideration when creating the texture, if texture filtering is used. You might want to paint the transparent area the same color as the edge of the opaque area (with alpha = 0). That way, when the filter works, it feeds better colors into the algorithm and produces a better image. You might also want to add a one-pixel-wide transparent border around the picture to prevent off-image sampling.

Again, this might be totally wrong. Try experimenting. And if I’m wrong, I’m sorry for wasting your time!

Here are more details on texture filtering, from the DirectX web site. You may find it interesting:

http://msdn.microsoft.com/en-us/library/windows/desktop/bb206250(v=vs.85).aspx
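The edge-darkening described above is easy to reproduce in plain C++. A minimal sketch (the texel values and the 50/50 sample position are made up purely for illustration):

```cpp
#include <cstddef>

// Average two straight-alpha RGBA texels 50/50, as a bilinear filter does
// when the sample point falls halfway between them.
void mixHalf(const float a[4], const float b[4], float out[4]) {
    for (std::size_t c = 0; c < 4; ++c)
        out[c] = 0.5f * a[c] + 0.5f * b[c];
}
```

Feeding it an opaque white edge texel {1,1,1,1} and an off-image {0,0,0,0} sample yields {0.5, 0.5, 0.5, 0.5}: a half-transparent grey fringe instead of pure white, which is exactly the dark border seen in the screenshots.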

I agree. This is most likely the reason for the dark lines.

Solutions:
(a) Use RENDER_SCALE_QUALITY of 0.
(b) When storing images, create a 2-pixel border around them. The border is created by taking the last row/column of pixels and duplicating it.
(c) Render your scene into a render texture without any scaling. Then you can scale the render texture and copy it to the screen. This is usually the best option, if there are no technical issues preventing you from doing this.
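Option (b) could be sketched like this in plain C++ (a hypothetical `padEdges` helper working on a raw RGBA buffer; real code would operate on the SDL_Surface pixels instead):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Pad an RGBA image by duplicating its outermost rows/columns `pad` times,
// so a linear filter sampling past the edge still sees the edge color.
std::vector<uint32_t> padEdges(const std::vector<uint32_t>& src,
                               int w, int h, int pad) {
    const int pw = w + 2 * pad;
    const int ph = h + 2 * pad;
    std::vector<uint32_t> dst(static_cast<size_t>(pw) * ph);
    for (int y = 0; y < ph; ++y) {
        for (int x = 0; x < pw; ++x) {
            // Clamp back into the source image; out-of-range coordinates
            // snap to the nearest border pixel, duplicating it.
            const int sx = std::min(std::max(x - pad, 0), w - 1);
            const int sy = std::min(std::max(y - pad, 0), h - 1);
            dst[static_cast<size_t>(y) * pw + x] = src[static_cast<size_t>(sy) * w + sx];
        }
    }
    return dst;
}
```

When rendering, pass a source rect that skips the padding, so only the original w x h region is drawn on screen.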



Pallav Nawani
IronCode Gaming Private Limited
Website: http://www.ironcode.com
Twitter: http://twitter.com/Ironcode_Gaming
Facebook: http://www.facebook.com/Ironcode.Gaming
Mobile: 9997478768

Thank you for your suggestions. I will be attempting to implement these over the next couple of days, and I’ll let you know what I find out.

After some testing, you guys are absolutely right.
All of my base images have transparent borders around them.

When drawn with nearest neighbor scaling, they look fine.
When drawn with linear or best, the border color (even though its alpha value is set to 0) is drawn no matter what around the border of everything.

I tested this by taking the image, making the transparent area a magenta color, and setting its alpha to 0. The image draws with a magenta border when scaled.

There’s something I don’t understand, though. Is it common practice to do as you said and make sure all of the transparent areas are set to a similar color before setting the alpha value to 0? That seems like a lot of extra work that just doesn’t make sense.

I’m wondering why these scaling algorithms don’t take into account that once they’ve reached the border of the image’s bounds, they should simply not factor in any other colors when blending and just use the ones at the border.
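For what it’s worth, that is exactly what clamp-to-edge addressing does (the GL_CLAMP_TO_EDGE suggestion earlier in the thread). In software terms it is just a coordinate clamp before the fetch. An illustrative sketch, not SDL API:

```cpp
#include <algorithm>
#include <cstdint>

// Clamp-to-edge texel fetch: coordinates outside the image are snapped to
// the nearest edge texel instead of wrapping to the opposite side.
uint32_t fetchClamped(const uint32_t* pixels, int w, int h, int x, int y) {
    const int cx = std::min(std::max(x, 0), w - 1);
    const int cy = std::min(std::max(y, 0), h - 1);
    return pixels[cy * w + cx];
}
```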

This is what is happening in the TTF library as well, once you’ve created a text surface and made it into a texture. Any time it’s scaling text, pixels on the extreme edge wrap over to the opposite sides.

Some examples: http://i.imgur.com/sF8qHON.png

It seems something would need to be changed in the TTF library to fix this. Has no one else encountered this before?

On 27 Jan 2014 21:42, ronkrepps wrote:

When drawn with nearest neighbor scaling they look fine.
When drawn with linear or best, the color (even though its alpha value
is set to 0) is drawing no matter what around the border of everything.

In short, this happens because the transparent pixels still technically
have a color even if you can’t see it, so the surrounding pixels are
blended with that color.

There’s something I don’t understand though. Is it common practice to do
like you said and make sure all of the transparent areas are set to a
similar color before setting the alpha value to 0? That seems like a lot
of extra work that just doesn’t make sense.

You can also solve it by using premultiplied alpha. When saving (or
loading) your textures, multiply the rgb values by the alpha value for
all the pixels (leave the alpha value as is). When drawing, instead of
"normal" alpha blending, use additive blending. That way, transparent
pixels literally contribute nothing when blended.

-g
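The premultiply step itself is a single per-pixel multiply. A sketch on a packed RGBA8888 value (the byte order here is an assumption; match whatever format your surfaces actually use):

```cpp
#include <cstdint>

// Multiply the RGB channels of a straight-alpha RGBA8888 pixel by its alpha.
// A fully transparent pixel becomes (0,0,0,0), so it can no longer tint
// its neighbors when the filter averages it in.
uint32_t premultiply(uint32_t rgba) {
    const uint32_t r = (rgba >> 24) & 0xFF;
    const uint32_t g = (rgba >> 16) & 0xFF;
    const uint32_t b = (rgba >> 8) & 0xFF;
    const uint32_t a = rgba & 0xFF;
    return ((r * a / 255) << 24) | ((g * a / 255) << 16) | ((b * a / 255) << 8) | a;
}
```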

You should use a premultiplied alpha blend mode instead of additive or “normal” alpha blending. SDL_Render doesn’t support this directly though. In OpenGL it would be this:

glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

whereas “regular” post-multiplied alpha blending is this:

glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);


On 28 Jan 2014 10:36, Alex Szpakowski wrote:

You should use a premultiplied alpha blend mode instead of additive or “normal” alpha blending. SDL_Render doesn’t support this directly though. In OpenGL it would be this:

glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

Oh, I’m sorry! You’re right. For some reason I thought that’s what SDL’s
additive mode did =P

-g

There’s one thing I haven’t tried:

  1. Render the text into a texture, in black (0,0,0).
  2. Create another texture, slightly larger than the text texture, and render the text texture into it.
  3. Render the target color into this texture with the blend mode SDL_BLENDMODE_ADD. SDL_FillRect() should do the job.

Maybe it works, maybe not.
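Step 3 relies on additive blending, which per color channel combines the source and destination roughly like this (a sketch of the math with 0-255 channel values; the helper itself is hypothetical, not SDL API):

```cpp
#include <algorithm>
#include <cstdint>

// Additive blend for one color channel, as SDL_BLENDMODE_ADD computes it:
// dstRGB = (srcRGB * srcA) + dstRGB, clamped to the 8-bit range.
uint8_t blendAdd(uint8_t srcC, uint8_t srcA, uint8_t dstC) {
    const int v = dstC + srcC * srcA / 255;
    return static_cast<uint8_t>(std::min(v, 255));
}
```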

Maybe we should add a function to set this?


A whole new function isn’t needed, just a new SDL_BlendMode enum (and implementations in the backends of SDL_Render).

It should probably be pretty trivial to implement if anyone wants to take a stab at it, although it seems a bit late to add features for 2.0.2 specifically.
