SDL_RenderCopyF uses ints

I’m using SDL_RenderCopyF in my game to draw part of a large image file as the background while the user pans and zooms around it. But as SDL_RenderCopyF uses integers (rather than floats) for the pixel positions of the rectangle to copy from (const SDL_Rect * srcrect), I get stuttering in the images I then draw on top of it when zoomed in.

Would it be possible to add a new function like:

int SDL_RenderCopyFF(SDL_Renderer * renderer,
                     SDL_Texture * texture,
                     const SDL_FRect * srcrect,
                     const SDL_FRect * dstrect);

so all coordinates are then floats?

You could try specifying the entire source image and doing the zoom/pan with the dest rect coords (they don’t have to be screen size or smaller, IIRC).
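A sketch of that idea, under my own assumptions (a plain struct standing in for SDL_FRect, and a pan offset measured in source pixels): pass NULL as the src rect and compute a dest rect covering the whole texture from the zoom factor and the pan.

```c
#include <stddef.h>

/* Stand-in for SDL_FRect so this sketch is self-contained. */
typedef struct { float x, y, w, h; } FRect;

/* Map a pan offset (in source pixels) and a zoom factor to a dest
 * rect covering the entire texture. Source pixel (pan_x, pan_y)
 * lands at screen (0, 0); off-screen parts are clipped by the GPU. */
static FRect whole_texture_dst(float pan_x, float pan_y, float zoom,
                               int tex_w, int tex_h)
{
    FRect dst;
    dst.x = -pan_x * zoom;
    dst.y = -pan_y * zoom;
    dst.w = (float)tex_w * zoom;
    dst.h = (float)tex_h * zoom;
    return dst;
}
```

With SDL itself this would then be something like SDL_RenderCopyF(renderer, texture, NULL, &dst), since the float dest rect carries the sub-pixel pan.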

Yes, that’d solve the problem. But I’d worry that drawing a big texture (I’m using a 4096x4096 background image) zoomed in, with most of it off the screen, might be less efficient.

What I ended up doing was a bit of maths: take the floating-point part that would be lost by SDL only allowing ints in the source rectangle, and draw the texture just off the top left of the screen by an equivalent amount. It meant also drawing a bit extra so I didn’t get a black line to the right and bottom of the screen, but it seems to work!

I’m not sure if SDL having only ints in the source rectangle was an oversight, or maybe some renderers (DirectX?) only allow that so SDL had to go for the lowest common denominator?

It’s probably more for better compatibility with non-power-of-two textures, which some APIs only support with non-normalized integer texture coordinates.

Also, every triangle that gets passed to the GPU is clipped to the screen before rasterization, so drawing a big texture with SDL_RenderCopyF() shouldn’t affect performance; the parts that are off screen aren’t rasterized.

Instead of starting a new thread, I want to add that this highlights a weird problem, imo.

Context: I am making my own 2D game engine and using that engine to make an editor for the engine’s data (that makes sense, right?). My engine supports scaling objects, and this is where src float precision is critical.

TL;DR: I solved my element-scaling rendering issue by changing the src rect to SDL_FRect and doing minor refactors inside the SDL functions, and I highly recommend allowing this.

I have a core component called a WindowElement that can position and size itself either by anchors or positions.

Part of this hierarchy is a feature that allows me to mark a window element as a mask. This is super useful for making a sub-window that can scroll for example.

The issue came with my texture editor, where the goal is to be able to specify the dimensions of the texture to use for sprites. So, for example:

<Sprite Name="Default" xPos="0" yPos="0" width="128" height="128"/>
<Sprite Name="MoveDividerCursor" xPos="0" yPos="7" width="23" height="19"/>

The default sprite is created for every texture and represents using the full texture.

Now here’s where stuff gets funky XD

Selecting sprite rects with pixel accuracy is not practical without zooming, so I added zooming, which means my engine supports the idea of scaling an image by a percentage.
My RenderableWindowElement subclass automatically calculates the src and dst rects, and in this case applies simple logic when an element is scaled to grab the correct src rect. However, when scrolling, I would be moving the dst rect (or the src rect) by sub-pixel amounts, which is intended behavior.

Imagine that within my masked area, with a texture zoomed 8x, my scroll is 2 pixels from the top left.
In reality I am stretching the src texture over a bigger destination area. Applying floats to the src rect (it’s one of the advantages of UVs being floats, for example) straightforwardly tells the renderer: hey, 6 pixels of this stretch are still visible, instead of forcing a snap by a whole source pixel (an 8-pixel conversion).
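To make the numbers concrete (my own worked example, not the poster’s exact code): at 8x zoom, one source pixel covers 8 destination pixels, so a 2-pixel scroll corresponds to a quarter of a source pixel, something an integer src rect simply cannot express.

```c
/* Convert a scroll measured in destination (screen) pixels into an
 * offset measured in source pixels at a given zoom factor. At 8x
 * zoom, a 2-pixel scroll is 0.25 source pixels; an int src rect
 * would have to snap that to 0 or 1 whole source pixels. */
static float src_offset_for_scroll(float scroll_dst_px, float zoom)
{
    return scroll_dst_px / zoom;
}
```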

Anyhoo, the change was relatively easy to do and the results were super helpful. Honestly, if you look at the SDL_RenderCopyF implementation:

In the case of not using render geometry (surface blitting), it’s as simple as injecting the float values converted to int.
In the geometry case, the src rect is actually lost past this function, as it’s used to generate the UVs and XYs, which are floats. It actually removes the need to float-cast the ints of the src rect, and the renderers incorporate it automatically.
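A sketch of what that geometry path does with the src rect, assuming the usual normalized-UV convention (the struct and names here are mine, not SDL’s internals): the rect is just divided by the texture size, so a float src rect carries its fractional part straight through to the GPU.

```c
/* Normalized texture coordinates for one quad. */
typedef struct { float minu, minv, maxu, maxv; } UVRect;

/* Normalize a float source rect (x, y, w, h in texels) into UV
 * coordinates for a tex_w x tex_h texture. A fractional x or y
 * survives intact, which is exactly what sub-pixel panning needs. */
static UVRect uvs_from_src(float x, float y, float w, float h,
                           int tex_w, int tex_h)
{
    UVRect uv;
    uv.minu = x / (float)tex_w;
    uv.minv = y / (float)tex_h;
    uv.maxu = (x + w) / (float)tex_w;
    uv.maxv = (y + h) / (float)tex_h;
    return uv;
}
```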

Disclaimer: my solution only works when rendering using geometry; SDL_Surface-to-SDL_Surface rendering won’t work, but I honestly have had no issue with that so far.


And what is SDL_RenderCopyF? Google doesn’t show me anything…

Can you provide a patch or pull request for this change? That seems pretty useful and something we could include in SDL 3.0.

Unfortunately, I am working on a copy of SDL2 and am not set up with GitHub (nor do I have any experience with it; Perforce user for too many years). For reference, I have my own personal local Perforce server set up.

The change is straightforward overall, but I noticed SDL_RenderCopy is gone in SDL3 and, unless I’m mistaken, it’s been renamed to SDL_RenderTexture, so I highlighted the gist of the change there. But since I’m not set up to compile or test this change, I didn’t go further.

Note: the thing I am a bit shakier on is that, if you are NOT using render geometry, I create an SDL_Rect that int-casts (losing precision) the SDL_FRect passed in. If I understood correctly, that path is mainly used when doing surface-to-surface rendering on the CPU instead of the GPU (from the digging I did in SDL2), where taking advantage of a float-based src rect would be MUCH more work. Since I don’t need such precision in that case, I didn’t go deeper…

In case you’re curious how this works out in practice, this is my WIP texture editor built on my engine, which is built out of roughly 500 window elements, of which 373 are renderable (some are disabled and thus not visible).