Overhead converting SDL_Surface to SDL_Texture, for an image I want to display once.

Long story short, I’m having performance issues, mostly related to some rendering stuff I’m doing with a third-party library, which I’m also looking to improve or replace, but that’s a different topic.

I’m wondering if I can improve performance by foregoing the conversion from SDL_Surface to SDL_Texture. I would imagine the tradeoff looks something like this:

Convert:
(plus) Renders to window way faster (every frame)
(minus) Overhead in conversion (once)
(minus) Overhead moving to VRAM (once)

For most stuff it’s a no-brainer, since you’ll generally want to render many more times than once. But considering I’m altering the image every frame, is the overhead worth considering?
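Roughly, the two paths I’m weighing look like this (just a sketch; `window`, `renderer`, and the source `surface` are assumed to already exist):

```c
#include <SDL.h>

/* Path A: pay the one-time conversion/upload, then draw the texture every frame. */
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, surface); /* once */
SDL_RenderCopy(renderer, tex, NULL, NULL);                          /* every frame */
SDL_RenderPresent(renderer);

/* Path B: skip the texture and blit the surface straight into the window surface
 * each frame. Note this uses SDL_GetWindowSurface/SDL_UpdateWindowSurface and
 * can't be mixed with an SDL_Renderer on the same window. */
SDL_Surface *win = SDL_GetWindowSurface(window);
SDL_BlitSurface(surface, NULL, win, NULL);
SDL_UpdateWindowSurface(window);
```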

Thanks!

How are you altering the image? You need to describe what you’re trying to do.

You can do many cheap modifications with the renderer blend modes, color mods, or rendering to a target texture. If you must do direct pixel modification, that’s not going to go well.
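For instance (a rough sketch; `renderer` and `sprite` stand in for whatever you already have):

```c
/* Cheap per-frame tweaks without touching pixels: */
SDL_SetTextureColorMod(sprite, 255, 128, 128);       /* tint */
SDL_SetTextureAlphaMod(sprite, 192);                 /* fade */
SDL_SetTextureBlendMode(sprite, SDL_BLENDMODE_ADD);  /* additive blending */

/* Or compose once into a render-target texture and reuse the result: */
SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, 512, 512);
SDL_SetRenderTarget(renderer, target);
SDL_RenderCopy(renderer, sprite, NULL, NULL);
SDL_SetRenderTarget(renderer, NULL);                 /* back to the window */
```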

Well, if you insist.

I’m re-rasterizing every frame, which of course SDL has no support for.

I hear moving to OpenGL might be an option, but it would be a pretty drastic move when I really only want to do this kind of zoom a handful of times and the solution I have works well for all my other cases.

Worst case, I would either try to scale using SDL’s built-in texture rendering functions (just putting a multiplier on some of the coordinates; it might not look as bad as raster scaling on pixel art) or, failing that, drop the idea entirely.
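By “a multiplier on some of the coordinates” I mean roughly this (a sketch; `x`, `y`, `w`, `h`, and `zoom` are placeholders):

```c
/* Scale at draw time by inflating the destination rect: */
SDL_Rect dst = { x, y, (int)(w * zoom), (int)(h * zoom) };
SDL_RenderCopy(renderer, sprite, NULL, &dst);

/* The filter is chosen per texture at creation time via this hint:
 * "nearest" keeps pixel art crisp, "linear" smooths (and blurs) it. */
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
```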

My insistence is because it’s not really possible to guide you without knowing what you are doing.

Re-rasterizing isn’t going to work. I assumed that’s what you were doing; I was asking why. What exactly are you doing? Compositing images? Drawing polygons? Colorizing? Blending? Why do you have to rasterize yourself instead of compositing with SDL_Renderer?

It’s possible SDL_UpdateTexture or SDL_LockTexture/SDL_UnlockTexture with SDL_TEXTUREACCESS_STREAMING would suffice too if you are simply trying to play video.
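The streaming pattern looks roughly like this (a sketch; sizes and the frame buffer are placeholders):

```c
/* Create once: */
SDL_Texture *stream = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                        SDL_TEXTUREACCESS_STREAMING, 1280, 720);

/* Every frame: */
void *pixels; int pitch;
if (SDL_LockTexture(stream, NULL, &pixels, &pitch) == 0) {
    /* write the new frame into `pixels`, one row every `pitch` bytes */
    SDL_UnlockTexture(stream);
}
SDL_RenderCopy(renderer, stream, NULL, NULL);

/* Or, if you already have a complete buffer, the simpler (often slower) route: */
SDL_UpdateTexture(stream, NULL, frame_buffer, frame_pitch);
```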

SDL_Renderer can already scale, so that isn’t a problem either.

Incorporating some direct calls to OpenGL in an otherwise SDL2 application isn’t at all “drastic”; SDL2 specifically supports this kind of ‘mixed’ arrangement. Obviously some care is needed, especially if you are using render batching, but if it would solve your problem it’s definitely worth investigating.

Well, we’ll see. It’s certainly unconventional, though I’ve gotten it to work pretty well for lots of cases. I’ve just hit an upper bound on the size of the finished texture that I’m looking for ways to raise.

When you say compositing, what exactly would I be working with? Drawing rectangles?

I’ve never heard this before. Everything I’ve seen suggests OpenGL rendering requires you to initialize with flags that make all our familiar blits etc. impossible. This is great news! Can you point me to some documentation?

I can’t be more specific without knowing in detail what you want to do, but in general SDL_GL_CreateContext() will return a context which you can use to make direct OpenGL calls, and SDL_GL_BindTexture() will bind an existing SDL texture to that context.
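Roughly (a sketch, not a complete GL setup; `window` and `sdl_texture` stand in for whatever you already have, and the texture is assumed to come from a GL-backed renderer):

```c
/* Own context, if you aren't reusing the renderer's: */
SDL_GLContext ctx = SDL_GL_CreateContext(window);
SDL_GL_MakeCurrent(window, ctx);

/* Bind an existing SDL texture so raw GL calls can sample it: */
float texw, texh;
if (SDL_GL_BindTexture(sdl_texture, &texw, &texh) == 0) {
    /* ... direct OpenGL calls here ... */
    SDL_GL_UnbindTexture(sdl_texture);
}
```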

You can often even use the renderer context if you use SDL_RenderFlush. Depends on what you wish to do. I mix the renderer with OpenGL very easily and successfully.
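The renderer-context route looks something like this (sketch, assuming a GL-backed renderer):

```c
SDL_RenderCopy(renderer, background, NULL, NULL);  /* normal renderer drawing */
SDL_RenderFlush(renderer);                         /* push queued batches to the GPU first */

/* ... raw OpenGL calls on the renderer's context here;
 *     restore any GL state you change before going back to the renderer ... */

SDL_RenderPresent(renderer);
```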

When you say compositing, what exactly would I be working with?

I mean just compositing multiple textures to the screen. SDL will happily do this including scaling, rotation, and flipping.
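In other words, plain SDL_RenderCopy/SDL_RenderCopyEx calls (sketch; sizes are placeholders):

```c
/* Draw one texture over another, scaled 2x, rotated, and flipped: */
SDL_RenderCopy(renderer, background, NULL, NULL);
SDL_Rect dst = { 100, 100, 2 * sprite_w, 2 * sprite_h };
SDL_RenderCopyEx(renderer, sprite, NULL, &dst, 45.0, NULL, SDL_FLIP_HORIZONTAL);
```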

After doing some more research, it seems even OpenGL implementations of SVG rasterization aren’t great in terms of time efficiency. Which makes sense: it has to parse text, which doesn’t parallelize well, and the standard includes non-polygon shapes like curves and stuff, so it’s basically exactly the wrong kind of work to give a GPU.

So I guess I’ll look into pre-rendering. My guess is this is going to take way too much memory (especially on large resolution windows), but who knows, maybe I’ll be surprised.

All this to avoid that nasty pixely look from real time sprite scaling, haha. But hey, if it were easy, it wouldn’t be as fun.

So, here’s what I landed on.

I define a maximum rasterization scale for each vector I instantiate (defaulting to -1.0, which means rasterize normally). At rasterization time, if the overall scale I’m asking it to draw at is above that maximum, it decreases the rasterization scale until it fits, rasterizes, then scales the result back up at draw time. And as an especially nice feature, it automatically takes the size of the monitor/window into account.

For example, if I made the vector for 720p, set the maximum rasterization scale to 4.0, and ask it to render at 2.0 scale on a 4K monitor, it calculates the overall scale as 2160/720 * 2.0 (= 6.0); it then rasterizes at 3.0 scale and draws at 2x, so overall it comes out at the right size.
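As a rough sketch of that logic (I’m assuming the scale is reduced by repeated halving, which matches the 6.0 → 3.0 example; all names, including the rasterize/draw calls, are placeholders):

```c
/* Sketch of the clamping logic; halving is an assumption that fits the example. */
float design_h   = 720.0f;    /* resolution the vector was authored for            */
float window_h   = 2160.0f;   /* e.g. a 4K monitor                                 */
float requested  = 2.0f;      /* scale asked for at draw time                      */
float max_raster = 4.0f;      /* per-vector cap; -1.0 means rasterize normally     */

float overall = (window_h / design_h) * requested;   /* 3.0 * 2.0 = 6.0 */
float raster  = overall;
if (max_raster > 0.0f) {
    while (raster > max_raster)
        raster /= 2.0f;                              /* 6.0 -> 3.0 */
}
float draw_scale = overall / raster;                 /* 2.0: stretch back up at draw time */

rasterize_vector(vec, raster);                       /* hypothetical rasterizer call */
draw_scaled(vec, draw_scale);                        /* hypothetical draw call       */
```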

This does look a little fuzzy, but has way better performance and definitely beats that nasty pixelization look that comes from on-the-fly sprite scaling.

As an interesting by-product, I messed up along the way and accidentally stumbled upon a really sweet effect I’m totally going to reuse: it renders a really blurry, pixelated image that slowly sharpens over time while staying the same size. Or you could run it in reverse.