Downscaling quality with SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "best");

I’m drawing a PNG of a zombie in my game, and when the zombies are a long distance away they are drawn at a much smaller pixel size than the PNG itself. I’m using SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "best"); (before I create any textures!) but the quality seems quite poor to me. Is this just the expected quality for downscaling, even when using "best", or am I getting something wrong?
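Roughly, my setup follows the usual pattern (this is a simplified sketch rather than my exact code; the window size and "zombie.png" are just placeholders):

	#include <SDL.h>
	#include <SDL_image.h>

	int main(int argc, char *argv[])
	{
	    SDL_Init(SDL_INIT_VIDEO);

	    /* The hint must be set before any textures are created. */
	    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "best");

	    SDL_Window *win = SDL_CreateWindow("game", SDL_WINDOWPOS_CENTERED,
	                                       SDL_WINDOWPOS_CENTERED, 1280, 720, 0);
	    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

	    /* "zombie.png" stands in for the 1024x1024 source image. */
	    SDL_Texture *zombie = IMG_LoadTexture(ren, "zombie.png");

	    /* Distant zombies end up drawn at around 64x64. */
	    SDL_FRect dst = { 100.0f, 100.0f, 64.0f, 64.0f };
	    SDL_RenderClear(ren);
	    SDL_RenderCopyF(ren, zombie, NULL, &dst);
	    SDL_RenderPresent(ren);
	    SDL_Delay(3000);

	    SDL_Quit();
	    return 0;
	}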

Here is the original image (1024x1024), what it looks like when it’s drawn on the screen with SDL (64x64), and what Gimp makes it look like when it downscales it (64x64), and is the sort of quality I was hoping for:

BTW this is on Windows using SDL 2.0.14 and SDL2_image 2.0.5

It looks like the image is being drawn with the nearest neighbor filter for some reason. I wouldn’t expect the quality to be as smooth as the gimp example, but not as bad as what you’re getting either. I’d try checking if the scale mode is set correctly with SDL_GetTextureScaleMode.
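Something along these lines should tell you (a quick sketch; zombieTexture stands in for whatever texture you’re drawing):

	SDL_ScaleMode mode;
	if (SDL_GetTextureScaleMode(zombieTexture, &mode) == 0) {
	    /* SDL_ScaleModeNearest = 0, SDL_ScaleModeLinear = 1, SDL_ScaleModeBest = 2 */
	    SDL_Log("texture scale mode: %d", (int)mode);
	}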

The wiki says "best" is only supported by Direct3D, which may have something to do with the issue. Looking at the source code, the OpenGL and software renderers seem to fall back to "linear" if set to anything other than "nearest".

Yes, I was thinking it looked like nearest neighbour. I’ve done an SDL_GetTextureScaleMode on the texture as you suggested and it says it’s SDL_ScaleModeBest (2) so that looks good. All I do then is call an SDL_RenderCopyF with that texture.

The app is running on Windows 10, so I’d have guessed it’d be using Direct3D, as I don’t tell SDL to use anything specific. (I couldn’t find an SDL function to query which renderer it’s using; does one exist?)

You can get the renderer you are using with SDL_GetRendererInfo and list all available renderers with SDL_GetNumRenderDrivers + SDL_GetRenderDriverInfo.
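For example, something like this (a rough sketch; ren is your SDL_Renderer):

	SDL_RendererInfo info;
	if (SDL_GetRendererInfo(ren, &info) == 0)
	    SDL_Log("active render driver: %s", info.name);

	int n = SDL_GetNumRenderDrivers();
	for (int i = 0; i < n; i++) {
	    if (SDL_GetRenderDriverInfo(i, &info) == 0)
	        SDL_Log("available render driver %d: %s", i, info.name);
	}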

However, I’ve put together a simple test program and used the image you provided, and still got the same results using Direct3D, with the image shown as if scaled with a nearest-neighbor filter. It seems that downscaling such a big image gives these results even with the correct filters, probably due to the lack of mipmaps, which are unavailable in SDL afaik.

The only suggestion that comes to mind now is using different-sized textures for the different distances and switching between them before drawing as needed. Sorry for pointing you in the wrong direction before.

Maybe set the render quality to “linear” and see if that makes a difference?

It’s all down to the aperture of the interpolation filter. To get anywhere near to the quality of the Gimp example, the filter must have an aperture at least comparable with one pixel in the final image. Since you are scaling from 1024 to 64 pixels, that implies a filter aperture of 16 pixels or more; the filter you are using in practice (“linear”) has an aperture of 2 pixels!

So to get as good a result you would need to implement your own scaling code, not use GPU scaling. However, there is a partial workaround: rather than scaling in one step, do it in a series of intermediate steps. For example, if you scale from 1024 to 512, then from 512 to 256, then from 256 to 128, and finally from 128 to 64 (all using "linear"), this is what the end result looks like:

[Image: zombi64x64, the zombie downscaled in stages to 64x64]
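If it helps, a rough sketch of that stepwise approach using SDL render targets might look like this (assuming the renderer supports SDL_TEXTUREACCESS_TARGET; the function and variable names are just placeholders):

	/* Halve the texture repeatedly with "linear" filtering until it is close
	   to the requested size. If at least one halving happens, a new texture is
	   returned and the caller owns it; otherwise src is returned unchanged. */
	SDL_Texture *DownscaleInSteps(SDL_Renderer *ren, SDL_Texture *src,
	                              int srcSize, int dstSize)
	{
	    SDL_Texture *cur = src;
	    int size = srcSize;
	    while (size / 2 >= dstSize) {
	        size /= 2;
	        SDL_Texture *half = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
	                                              SDL_TEXTUREACCESS_TARGET,
	                                              size, size);
	        SDL_SetTextureScaleMode(cur, SDL_ScaleModeLinear);
	        SDL_SetTextureBlendMode(cur, SDL_BLENDMODE_NONE); /* copy alpha, don't blend */
	        SDL_SetRenderTarget(ren, half);
	        SDL_RenderCopy(ren, cur, NULL, NULL);  /* NULL dst = fill the whole target */
	        SDL_SetRenderTarget(ren, NULL);
	        if (cur != src)
	            SDL_DestroyTexture(cur);
	        cur = half;
	    }
	    return cur;
	}

Note this relies on SDL_SetTextureScaleMode/SDL_SetTextureBlendMode (SDL 2.0.12+), and it changes the scale and blend modes on the source texture along the way, so adjust to taste.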

Yeah, this is part of the reason 3D games use mipmaps. The GPU is doing bilinear texture filtering, but it only samples neighboring texels. Because the “next” pixel in the final rendering winds up being farther apart than that, you get the effect you’re seeing.

Like @rtrussell said, since SDL doesn’t provide any way to use mipmaps, you’ll have to create some pre-scaled images and then choose one based on the on-screen size of your zombie.
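As a rough illustration of the selection step (the structure and names here are hypothetical, just to show the idea):

	/* Hypothetical: levels[0] is the full-size image, levels[1] is half size,
	   levels[2] is quarter size, and so on. Pick the smallest level that is
	   still at least as large as the on-screen size. */
	typedef struct {
	    SDL_Texture *levels[8];
	    int count;
	    int fullSize;          /* e.g. 1024 */
	} PreScaledTexture;

	SDL_Texture *PickLevel(const PreScaledTexture *t, int onScreenSize)
	{
	    int best = 0;
	    for (int i = 1; i < t->count; i++) {
	        if (t->fullSize >> i >= onScreenSize)
	            best = i;
	        else
	            break;
	    }
	    return t->levels[best];
	}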

Thanks everyone. If there is no better texture downscaling available for my SDL games, I’ll go ahead and pre-downscale my images with Gimp and then get my code to automatically choose and scale the closest version of each image in the ‘UXTexture’ class I’ve written.

For the iOS version of my games though I still use a load of OpenGL I cobbled together years back (with no SDL) and use that with my UXTexture class. I’m no expert on OpenGL and graphics rendering so I don’t know why but that doesn’t seem to give the same problems that SDL has:

I’ve tried forcing SDL on Windows to use OpenGL but that seemed to give similar results as the default DirectX did.

TBH, just use OpenGL everywhere.

What you’re doing requires mipmaps to look good, and what you’re planning to do to work around the issue is implementing mipmaps manually.
It looks better with OpenGL because (presumably) it automatically creates mipmaps when you upload the textures and uses them when rendering smaller versions of your image.
(I forget exactly how it works with OpenGL; maybe you need to specify explicitly that mipmaps should be created for a texture, and maybe you did that, but it should only be one GL call anyway.)

Doing this with OpenGL should be both much easier and faster than implementing something like mipmapping manually (and might even look better if it interpolates between mipmaps?)

But the OP said “I’ve tried forcing SDL on Windows to use OpenGL but that seemed to give similar results as the default DirectX did” and that’s my experience too: OpenGL is no better than Direct3D in respect of scaling.

I’m not even convinced, from the example posted, that iOS is doing any better either: the pixels just look slightly ‘blurred’. That could simply be because it’s a retina display and there’s an extra stage of scaling from SDL’s output to the screen.

I’ve not experimented with glGenerateMipmap(), which I presume is what you are referring to in respect of “automatic” mipmap creation. I’m not sure how easy it would be to integrate that with the SDL rendering pipeline.

Thanks rtrussell, you’re absolutely right! I tried running the SDL code on Android (so definitely using OpenGL) and it gave the same results as SDL on Windows (I guess using DirectX), and then I tried running my own OpenGL code (so no SDL) on an iPad 4 which doesn’t have a retina screen - and got the same results as SDL does!

So it looks like you’re right: everybody is doing a bad job of downscaling images, but iOS retina screens must have an extra stage which blurs the image down.

It may not make much difference to most images, but my zombie images have white pixels in them, and when the image gets downscaled they stand out really badly. I’ve gone with the suggestion from above and manually created 1/2, 1/4, 1/8 etc… size images, and my code now automatically chooses the best one for the size that’s being drawn.


In your custom OpenGL code, try adding the following lines after the call to glTexImage2D() (or similar, what you’re using to upload the texture data):

	glGenerateMipmap(GL_TEXTURE_2D);                                   /* build the full mipmap chain from level 0 */
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); /* trilinear minification */
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);               /* bilinear magnification */

This is for OpenGL 3+ and OpenGL ES2.0+.
For older OpenGL versions you can instead set glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, 1); before the glTexImage2D() call.

And to be clear, I was only talking about custom OpenGL code, not SDL Render with OpenGL.