SDL scaled render visual bugs

I encountered a rendering bug when using SDL_RenderCopy to render part of a texture.
When the logical size set with SDL_RenderSetLogicalSize differs from the actual size of the displayed area, and either SDL_RENDERER_SOFTWARE is used, or SDL_RENDERER_ACCELERATED is used with SDL_HINT_RENDER_SCALE_QUALITY = "1" or better, the displayed area shows lines formed from neighboring pixels of the texture part.
The issue doesn’t appear with SDL_RENDERER_ACCELERATED and SDL_HINT_RENDER_SCALE_QUALITY = "0", but that produces “ugly” rendering for complicated graphics (e.g. vertical lines).

I currently have 3 workarounds for this:

  1. Use a texture buffer: copy the desired part of the texture into it first, then render the buffer.
  2. Copy each desired part of the texture into its own separate texture (a rough sketch of this follows below).
  3. Separate the texture parts with 1 px transparent gaps.

My use case is sprites. I don’t really like any of these three approaches, so I’m interested to hear what other people think.
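
For illustration, here is a rough sketch of the second workaround (the helper name and the RGBA32 format choice are mine, not anything SDL mandates): blit one cell of the loaded spritesheet surface into its own small surface and create a standalone texture from it, so the filter has no neighboring sprite to sample from.

static SDL_Texture *create_sprite_texture(SDL_Renderer *renderer, SDL_Surface *sheet, int col, int row, int size) {
	SDL_Surface *cell = SDL_CreateRGBSurfaceWithFormat(0, size, size, 32, SDL_PIXELFORMAT_RGBA32);
	if (cell == NULL) {
		return NULL;
	}

	/* Copy the cell 1:1; disable blending so the alpha channel is copied verbatim. */
	SDL_Rect src = { col * size, row * size, size, size };
	SDL_SetSurfaceBlendMode(sheet, SDL_BLENDMODE_NONE);
	SDL_BlitSurface(sheet, &src, cell, NULL);

	SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, cell);
	SDL_FreeSurface(cell);
	return texture;
}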

Minimal example reproducing the bug:

#include <SDL.h>
#include <assert.h>



#define SUBSYSTEMS SDL_INIT_VIDEO | SDL_INIT_EVENTS | SDL_INIT_TIMER

#define WINDOW_FLAGS 0

#ifdef RENDERER_SOFTWARE
#	define RENDERER_FLAGS SDL_RENDERER_SOFTWARE
#else
#	define RENDERER_FLAGS SDL_RENDERER_ACCELERATED
#	define RENDER_SCALE_QUALITY "1"
#endif

#define WINDOW_W 512
#define WINDOW_H 512
#define RENDER_W 768
#define RENDER_H 768
#define OBJECT_SIZE 64
#define COLS (RENDER_W / OBJECT_SIZE)
#define ROWS (RENDER_H / OBJECT_SIZE)

int main(int argc, char **argv) {
	(void)argc;
	(void)argv;

	int rc;
	(void)rc; /* rc is only read inside asserts; this silences "unused" warnings when NDEBUG is defined. */

	rc = SDL_InitSubSystem(SUBSYSTEMS);
	assert(rc == 0);

#ifndef RENDERER_SOFTWARE
	SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, RENDER_SCALE_QUALITY);
#endif

	SDL_Window *window = SDL_CreateWindow("test", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, WINDOW_W, WINDOW_H, WINDOW_FLAGS);
	assert(window != NULL);
	SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, RENDERER_FLAGS);
	assert(renderer != NULL);

	rc = SDL_RenderSetLogicalSize(renderer, RENDER_W, RENDER_H);
	assert(rc == 0);

	SDL_Surface *spritesheet_surface = SDL_LoadBMP("spritesheet.bmp");
	assert(spritesheet_surface != NULL);
	SDL_Texture *spritesheet = SDL_CreateTextureFromSurface(renderer, spritesheet_surface);
	assert(spritesheet != NULL);
	SDL_FreeSurface(spritesheet_surface);

	SDL_Rect src;
	SDL_Rect dst;

	src.x = 0;
	src.y = 0;
	src.w = OBJECT_SIZE;
	src.h = OBJECT_SIZE;
	dst.w = OBJECT_SIZE;
	dst.h = OBJECT_SIZE;

	for (int x = 0; x < COLS; x++) {
		for (int y = 0; y < ROWS; y++) {
			dst.x = x * OBJECT_SIZE;
			dst.y = y * OBJECT_SIZE;

			rc = SDL_RenderCopy(renderer, spritesheet, &src, &dst);
			assert(rc == 0);
		}
	}

	SDL_RenderPresent(renderer);

	SDL_Event event;
	int exit = 0;

	do {
		while (SDL_PollEvent(&event)) {
			if (event.type == SDL_QUIT) {
				exit = 1;
			}
		}

		SDL_Delay(200); // Give the CPU a break.
	} while (!exit);

	SDL_DestroyRenderer(renderer);
	SDL_DestroyWindow(window);
	SDL_QuitSubSystem(SUBSYSTEMS);

	return 0;
}

I can’t upload the spritesheet to the forum directly as a new user, so here it is, zstd-compressed and base64-encoded:

KLUv/QSIBQIAggMJEeAtBwBQEOmScrB/2VR/7Z0Cd8OETPnf95Fx+0Wwrs7+xk4JCACF3uDagQYo
sNNRmAEEOw6IAcwCV5XdMEOuaYQ=

Screenshot of both cases:

If you want to use texture filtering (SDL_HINT_RENDER_SCALE_QUALITY = "1" or better), then you’re going to have to put a 1-pixel empty border around each sprite on your sprite sheet.

This is because the GPU blends neighbouring pixels from the texture together when filtering.
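
To make that concrete, here is a small sketch (reusing OBJECT_SIZE from the example above; the PADDING and CELL_STRIDE names are mine) of computing the source rect when every sprite cell on the sheet carries a 1-pixel empty border:

#define PADDING 1
#define CELL_STRIDE (OBJECT_SIZE + 2 * PADDING)

/* Each sheet cell is CELL_STRIDE x CELL_STRIDE; the src rect skips the border,
   so the filter can only ever sample the sprite itself or its empty padding. */
static void sprite_src_rect(SDL_Rect *src, int col, int row) {
	src->x = col * CELL_STRIDE + PADDING;
	src->y = row * CELL_STRIDE + PADDING;
	src->w = OBJECT_SIZE;
	src->h = OBJECT_SIZE;
}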

Yes, I mentioned that, but thanks for the reply.
I was curious which approach other people choose, so yours is the border lines.
I’m also inclined towards that method; it’s the easiest and least expensive.

The only options, if you want bilinear texture filtering, are a border (be it empty or a repeat of the sprite’s edge pixels) or making every sprite a separate texture.


Note that the (invisible) colour of fully transparent pixels can still affect the interpolated colour between nearby pixels.

You probably still want at least a 1 px margin of transparency to avoid razor-sharp edges.

Hello,

I believe I have a similar problem, and I have been searching for a solution for a week. I am making a sprite-based game; my sprites are drawn in isometric perspective (2:1, width to height), all come from a single atlas, and I copy each one from the atlas to the screen.

The problem is that when my logical resolution is different from my screen resolution and I enable interpolation (setting SDL_HINT_RENDER_SCALE_QUALITY to anything other than nearest), I see black lines outlining the edge of each isometric tile.

I am trying to understand how I could apply the idea of adding a border around the sprites’ edge pixels in this case. My issue is that each sprite is a 64x32 transparent rectangle on which only the centre “diamond” is illustrated with art, so that when I stack the tiles together I get the illusion of isometric perspective.

Here is what I mean:
(Screenshot showing the black lines along the tile edges.)

So it’s blending the “art” pixels in the texture with the empty surrounding pixels. One solution is to repeat the edge of the art area outward by at least one pixel, so that when the filter blends, it samples those repeated pixels instead of empty ones.

The tricky part is that you need to keep the alpha channel as it is (the repeated pixels stay fully transparent), and a lot of image editing software (including Photoshop, IIRC) clears the colour of any fully transparent pixels when saving.
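
Here’s a rough sketch of doing that bleeding step at load time instead of in the image editor (my own helper, assuming the sprite surface has already been converted to SDL_PIXELFORMAT_RGBA32): every fully transparent pixel that touches a visible pixel gets that neighbour’s RGB copied in, while its alpha stays zero.

static void bleed_edge_colors(SDL_Surface *s) {
	SDL_LockSurface(s);
	Uint8 *px = (Uint8 *)s->pixels;

	for (int y = 0; y < s->h; y++) {
		for (int x = 0; x < s->w; x++) {
			Uint8 *p = px + y * s->pitch + x * 4;
			if (p[3] != 0) {
				continue; /* Visible pixel, leave it alone. */
			}
			/* Find a visible 4-neighbour and copy its RGB; alpha stays 0,
			   so the sprite's shape doesn't change. */
			const int dx[4] = { -1, 1, 0, 0 };
			const int dy[4] = { 0, 0, -1, 1 };
			for (int i = 0; i < 4; i++) {
				int nx = x + dx[i], ny = y + dy[i];
				if (nx < 0 || ny < 0 || nx >= s->w || ny >= s->h) {
					continue;
				}
				Uint8 *n = px + ny * s->pitch + nx * 4;
				if (n[3] != 0) {
					p[0] = n[0];
					p[1] = n[1];
					p[2] = n[2];
					break;
				}
			}
		}
	}

	SDL_UnlockSurface(s);
}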


Hi,

Thanks for the reply. I ended up using a different solution hinted at in a separate thread: I render the entire scene to a separate texture sized at the logical resolution, then I switch the render target and draw that texture to the screen at the screen resolution. I realized this also has an added bonus of letting me render the UI separately from the world scene, so I can keep the UI unaffected by resolution scaling by rendering it as yet another texture on top of the scene texture and alpha-blending it over the top.
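
For anyone finding this later, a minimal sketch of that setup (variable names are mine, and RENDER_W/RENDER_H are reused from the first post’s example; the renderer needs to be created with SDL_RENDERER_TARGETTEXTURE for render targets to work):

/* Created once: an off-screen scene texture at the logical resolution. */
SDL_Texture *scene = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                       SDL_TEXTUREACCESS_TARGET, RENDER_W, RENDER_H);

/* Per frame: */
SDL_SetRenderTarget(renderer, scene);        /* draw into the scene texture */
SDL_RenderClear(renderer);
/* ... SDL_RenderCopy() all sprites here, unscaled ... */

SDL_SetRenderTarget(renderer, NULL);         /* back to the window */
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, scene, NULL, NULL); /* one scaled copy of the finished frame */
/* The UI could be composited here as another texture before presenting. */
SDL_RenderPresent(renderer);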
