SDL_RenderCopyF uses ints

I’m using SDL_RenderCopyF in my game to draw part of a large image as the background while the user pans and zooms around it. But because SDL_RenderCopyF uses integers (rather than floats) for the positions in the image to be copied from (const SDL_Rect * srcrect), I get stuttering relative to the images I then draw on top of it when zoomed in.
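
To show what I mean, here’s a rough sketch (placeholder names like cam_x, zoom, screen_w, renderer and background, not my actual code): the camera position is a float, but the source rectangle has to be an SDL_Rect, so the background can only move in whole source texels, while everything drawn on top with float dest rects moves smoothly.

/* Rough sketch of the problem, not my actual code; all names are placeholders. */
int   screen_w = 1280, screen_h = 720;   /* window size, just for the example */
float cam_x = 123.37f, cam_y = 45.81f;   /* camera position, in source texels */
float zoom  = 3.0f;                      /* screen pixels per source texel    */

SDL_Rect src = {                         /* the fractional part is lost here  */
    (int)cam_x, (int)cam_y,
    (int)(screen_w / zoom), (int)(screen_h / zoom)
};
SDL_FRect dst = { 0.0f, 0.0f, (float)screen_w, (float)screen_h };

/* The background can only step by whole texels, but sprites drawn on top
   with float dest rects move smoothly, so the background judders against
   them when zoomed in. */
SDL_RenderCopyF(renderer, background, &src, &dst);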

Would it be possible to add a new function like:

int SDL_RenderCopyFF(SDL_Renderer * renderer,
                     SDL_Texture * texture,
                     const SDL_FRect * srcrect,
                     const SDL_FRect * dstrect);

so all coordinates are then floats?

You could try specifying the entire source image and doing the zoom/pan with the dest rect coords (they don’t have to be screen size or smaller, IIRC). Something like the sketch below.
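
Untested sketch of what I mean; cam_x, cam_y and zoom are whatever your camera already uses (position in source texels, screen pixels per texel), and renderer/background are your existing objects.

int tex_w, tex_h;
SDL_QueryTexture(background, NULL, NULL, &tex_w, &tex_h);

SDL_FRect dst = {
    -cam_x * zoom,            /* shift the whole texture by the camera offset */
    -cam_y * zoom,
    tex_w * zoom,             /* dest rect can be much larger than the screen */
    tex_h * zoom
};
SDL_RenderCopyF(renderer, background, NULL, &dst);    /* NULL = whole texture */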

Yes, that would solve the problem. But I’d be worried that it might be less efficient to draw such a big texture (I’m using a 4096x4096 background image) zoomed in, with most of it ending up off the screen.

What I ended up doing was a bit of maths to take the fractional part that would otherwise be lost by SDL only allowing ints in the source rectangle, and draw the texture just off the top left of the screen by an equivalent amount. That meant also drawing a bit extra so I didn’t get a black line at the right and bottom of the screen, but it seems to work!
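
Roughly this (simplified sketch with the same placeholder names as before; the exact amount of extra source to read is a guess here):

/* Keep an integer source rect, but carry the lost fractional part into the
   dest rect so the background starts just off the top left of the screen. */
float frac_x = cam_x - (int)cam_x;
float frac_y = cam_y - (int)cam_y;

SDL_Rect src = {
    (int)cam_x, (int)cam_y,
    (int)(screen_w / zoom) + 2,          /* a bit extra source to the right   */
    (int)(screen_h / zoom) + 2           /* and below, to avoid black lines   */
};
SDL_FRect dst = {
    -frac_x * zoom,                      /* start just off the top left       */
    -frac_y * zoom,
    src.w * zoom,
    src.h * zoom
};
SDL_RenderCopyF(renderer, background, &src, &dst);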

I’m not sure if SDL having only ints in the source rectangle was an oversight, or whether some renderers (DirectX?) only allow that, so SDL had to go for the lowest common denominator?

It’s probably more for better compatibility with non-power-of-two textures, which some APIs only support with non-normalized integer texture coordinates.

Also, each and every triangle that gets passed to the GPU is clipped to the screen before rasterization. So drawing a big texture with SDL_RenderCopyF() shouldn’t affect performance; the parts that are off screen aren’t rasterized.