SDL2 sprite batching confusion

I created a texture atlas (1024 x 1024) to crop my sprites from. The sprites were drawn with 3840 x 2160 as the intended target resolution. Currently, rendering 500 sprites from that atlas at 1920 x 1080 gives me 270-280 fps.
I’m not sure if that is fast or not, but the performance is the same as when I loaded every image as an individual texture and rendered those.
Toggling SDL_HINT_RENDER_BATCHING and recompiling still gives me the same fps.
Calling SDL_RenderFlush() after every SDL_RenderCopy() still gives me the same fps.
I’m using SDL version 2.0.10. I was hoping for a performance boost from using texture atlases, but maybe I’m missing something?
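For reference, here is a stripped-down sketch of what I’m describing (not my actual code; the SDL_image usage, the atlas path, the 64x64 sprite size, and the grid layout are just placeholders):

```c
#include <SDL.h>
#include <SDL_image.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    IMG_Init(IMG_INIT_PNG);

    /* The hint must be set before SDL_CreateRenderer to take effect. */
    SDL_SetHint(SDL_HINT_RENDER_BATCHING, "1");

    SDL_Window *win = SDL_CreateWindow("atlas test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1920, 1080, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* One 1024x1024 atlas texture; every sprite is a sub-rectangle of it. */
    SDL_Texture *atlas = IMG_LoadTexture(ren, "atlas.png");

    int running = 1;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT) running = 0;
        }

        SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
        SDL_RenderClear(ren);

        /* 500 sprites, all cropped from the same atlas texture. */
        for (int i = 0; i < 500; i++) {
            SDL_Rect src = { (i % 16) * 64, ((i / 16) % 16) * 64, 64, 64 };
            SDL_Rect dst = { (i * 37) % 1856, (i * 53) % 1016, 64, 64 };
            SDL_RenderCopy(ren, atlas, &src, &dst);
        }

        SDL_RenderPresent(ren);
    }

    SDL_DestroyTexture(atlas);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    IMG_Quit();
    SDL_Quit();
    return 0;
}
```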

The savings from batching come from reduced draw-call overhead and fewer state changes (texture switches) that can cause pipeline stalls. How much that matters depends on the graphics API and on how the application manages textures, i.e. whether it uses bindless textures or calls glBindTexture() for every draw.
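As far as I know, the batched renderer can only merge consecutive copies that share the same texture (and other state), so one practical way to help it is to sort your draw list by texture before submitting. A rough sketch, where the Sprite struct and the draw_sprites() helper are made up for illustration:

```c
#include <SDL.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sprite record, not part of SDL. */
typedef struct {
    SDL_Texture *texture;  /* which texture (atlas page) this sprite uses */
    SDL_Rect src;          /* sub-rectangle inside that texture */
    SDL_Rect dst;          /* where to draw it on screen */
} Sprite;

/* Order sprites by texture pointer so consecutive SDL_RenderCopy calls
 * share a texture and can be merged into fewer draw calls. */
static int cmp_by_texture(const void *a, const void *b)
{
    uintptr_t ta = (uintptr_t)((const Sprite *)a)->texture;
    uintptr_t tb = (uintptr_t)((const Sprite *)b)->texture;
    return (ta > tb) - (ta < tb);
}

static void draw_sprites(SDL_Renderer *ren, Sprite *sprites, size_t count)
{
    qsort(sprites, count, sizeof(Sprite), cmp_by_texture);
    for (size_t i = 0; i < count; i++) {
        SDL_RenderCopy(ren, sprites[i].texture, &sprites[i].src, &sprites[i].dst);
    }
}
```

With a single atlas this is moot, since everything already shares one texture, but it becomes relevant once sprites are spread across several textures.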

Modern desktop GPUs, coupled with modern desktop CPUs, will laugh at being asked to draw 1000 textured triangles, even if every texture is separate and every triangle is a separate draw call.

Back when he was first working on render batching, @icculus posted a link to a sample app that rendered 20k sprites. On desktop there was a noticeable speed improvement, but on mobile devices it went from unusable without batching to no problem at all with batching.

Thanks for the reply. If that is the case then I might look for other ways to optimize.

You should still leave it turned on (it’s the default now). And using a texture atlas can have other benefits, like lower memory usage.

Okay, so after trying to see the performance with my dedicated graphics disabled, I still got the same performance and realized that my program had been running on Intel graphics this whole time. After switching to the Nvidia GPU, rendering 20000 sprites went from 17 fps to 131 fps. The only downside is that the image quality seems to decrease on the Nvidia GPU.

That’s pretty strange. What kind of decrease in image quality?

It looks a bit blurry, while rendering on the Intel graphics looks like the original PNG. Maybe the problem is somewhere in the Nvidia Control Panel. The difference is small, but I would still like it to look like the original PNG.

Are you rendering at whole integer coordinates?

No, I’m using float coordinates

Intel graphics: [screenshot — good]
Nvidia graphics: [screenshot — bad]

But I mean, are they whole pixels (i.e. 1.0, 2.0, etc.) or are they fractional (1.2, 3.5, etc.)? Because if they don’t fall on whole pixel coords then you wind up with a situation where a pixel in the texture gets blended across multiple screen pixels and looks blurry. It might just be that Intel’s filtering is different.
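If you want to rule that out, you could snap the destination coordinates to whole pixels before drawing. Something like this sketch (the helper name is just for illustration; SDL_RenderCopyF needs 2.0.10 or newer, which you’re already on):

```c
#include <SDL.h>
#include <math.h>

/* Snap a sprite's floating-point position to whole pixels before drawing,
 * so one texel maps to one screen pixel instead of being blended across
 * neighbouring pixels. */
static void render_sprite_snapped(SDL_Renderer *ren, SDL_Texture *atlas,
                                  const SDL_Rect *src, float x, float y)
{
    SDL_FRect dst;
    dst.x = floorf(x + 0.5f);   /* round to the nearest whole pixel */
    dst.y = floorf(y + 0.5f);
    dst.w = (float)src->w;
    dst.h = (float)src->h;
    SDL_RenderCopyF(ren, atlas, src, &dst);
}
```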

Everything else is the same? Same screen resolution etc? I only ask because the two screenshots are different sizes.

Everything is the same; just switching the GPU from the Nvidia Control Panel causes the problem.

Do you have a full frame vs full frame comparison? Or maybe try 10k or 5k sprites and see if it’s some weird optimization Nvidia is doing once it reaches a certain number of polys.

Honestly, though, I have absolutely no idea why it’s blurrier. If you’re rendering at the same coordinates at the same resolution, they should be pretty much identical unless there are some Nvidia graphics settings or “optimizations” interfering.

What do you mean by full frame vs full frame? The whole screen?
These are screenshots from a scene with ~470 entities:
Intel graphics: [screenshot — good]
Nvidia graphics: [screenshot — bad]

This must be an Nvidia thing, because I’m not really doing anything else :neutral_face:.

Yeah, I have no idea. The only other thing that comes to mind is that maybe Nvidia is rendering at a lower resolution and then upscaling. What does SDL_GetRendererOutputSize() say for each of them?
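Something like this should show whether the output size and the window size differ (log_output_size() is just a throwaway helper):

```c
#include <SDL.h>

/* Compare the window size with the renderer's output size. If the two
 * differ, something is rendering at another resolution and scaling. */
static void log_output_size(SDL_Window *win, SDL_Renderer *ren)
{
    int ww, wh, ow, oh;
    SDL_GetWindowSize(win, &ww, &wh);
    SDL_GetRendererOutputSize(ren, &ow, &oh);
    SDL_Log("window: %dx%d, renderer output: %dx%d", ww, wh, ow, oh);
}
```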

Otherwise I have absolutely no idea. Sorry!


Switching render drivers from opengl to direct3d removes the blurriness. As for the output size, both give me the same results.
So Intel graphics has good quality with both opengl and direct3d, but Nvidia graphics is only good with direct3d?
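For anyone else trying to reproduce this: the backend can be forced with the SDL_HINT_RENDER_DRIVER hint before the renderer is created (the SDL_RENDER_DRIVER environment variable works too). A minimal sketch, with the window size and flags as placeholders:

```c
#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    /* Must be set before SDL_CreateRenderer to take effect.
     * Other valid names include "direct3d11", "opengl", "opengles2", "software". */
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "direct3d");

    SDL_Window *win = SDL_CreateWindow("driver test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1920, 1080, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* Confirm which render driver actually got picked. */
    SDL_RendererInfo info;
    SDL_GetRendererInfo(ren, &info);
    SDL_Log("render driver in use: %s", info.name);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```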

It sounds like your Nvidia Control Panel settings probably have an override enabled for a post-process antialiasing effect in OpenGL apps.


Still no good. I tried the “Let the 3D application decide” option and it didn’t work, and most of my settings are application-controlled.

Okay, so turning on “Antialiasing-FXAA” was causing the problem. I hadn’t read the warning: “Turn off if you notice artifacts or dithering around the edges of objects”. Thanks for the help, guys!