SDL2 Stuttering

Hi there folks

I'm working on a little project in which I'd like to build a 2D isometric game using SDL2, and I'm encountering a subtle stuttering issue.

The full code can be found on GitHub: jmarvin90/isometric-game - a two-dimensional, isometric projection game project inspired by the 2D Game Engine with C++ course on Pikuma.com.

In short, moving sprites occasionally stutter instead of moving smoothly.

I think this issue is most likely caused by how I've implemented my time step. I suspect it's something to do with:

  • The constants I’m using to represent things such as the target FPS/milliseconds per frame
  • The Game class member I’m using to store the millisecond number for the previous frame
  • The game update logic I’m using to get the “current” millisecond number, calculate a delta time, update the game state, and “delay”
  • The render logic - including a suspect static_cast - I’m using to get the destination rect to which sprites should be rendered

Relevant links (github.com/):

  • Constants: jmarvin90/isometric-game/blob/main/src/engine/constants.h
  • Game class - millisec val member: jmarvin90/isometric-game/blob/main/src/engine/game.h#L21
  • Game loop update logic: jmarvin90/isometric-game/blob/main/src/engine/game.cpp#L148
  • Render logic: jmarvin90/isometric-game/blob/main/src/engine/systems/render.h#L12

I've tried a couple of things - performing smarter rounding (in place of static casts) when computing the destination render rect, using SDL_FRect instead of SDL_Rect, lowering the target frame rate, and re-ordering the update function - all to no avail.

Is anyone able to help diagnose the nature of the stutter?

Any help is greatly appreciated!

One potential problem is that the delta_time does not include the time it takes to do the mouse, camera and movement updates. To fix this, I think you should set millis_previous_frame to the same value that you used when calculating the delta_time:

// Sample the clock once and reuse the value everywhere.
auto millis_current_frame = SDL_GetTicks();

// Milliseconds to seconds.
double delta_time {
	(millis_current_frame - millis_previous_frame) / 1'000.f
};

...

// Same sample as above, so the time spent in the updates
// between the two lines is not lost from the next frame's delta.
millis_previous_frame = millis_current_frame;

Yeah it’s almost certainly how you calculate your delta time.

Calculate it like this:

Add a member field to your Game class:

float _last_time = 0.0f;

and in Game::update() calculate the delta time with:

const float now = static_cast<float>(SDL_GetTicks()) / 1000.0f;
const float delta_time = now - _last_time;
_last_time = now;

Do not call SDL_Delay() at all.

Then calculate movement with:

transform.position += rigid_body.velocity * delta_time;

(note that you can multiply a vec2 by a float, so there is no need to decompose the fields).

Hi folks - thank you very much for your responses!

@Peter87 I think your suggestion is consistent with @trojanfoe’s example - specifically relating to only calling SDL_GetTicks() once - have I understood that correctly?

@trojanfoe thanks for the suggestion re. vec2 * float - that has simplified the code quite a bit.

I’ve removed the SDL_Delay() and the movement is working as intended (e.g. at the correct speeds) - but I am still encountering the same, very slight stutter in the movement of sprites.

Is it possible that static_casting each sprite’s position elements (X, Y coordinates) into the integers required by SDL_RenderCopyEx is to blame?

Thank you again for the help.

Yes, snapping the positions to integers is almost certainly causing microstutters. You can try SDL_RenderCopyExF(), but the downside is that the sprites will blur slightly when they’re not on pixel boundaries.
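For illustration, a minimal sketch of the float-rect variant (renderer, texture, src_rect and the transform/sprite fields stand in for your own variables):

SDL_FRect dst {
    transform.position.x,                 // keep the fractional position
    transform.position.y,
    static_cast<float>(sprite.width),
    static_cast<float>(sprite.height)
};
SDL_RenderCopyExF(renderer, texture, &src_rect, &dst, 0.0, nullptr, SDL_FLIP_NONE);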

I do want to mention that most SDL2 functions dealing with an SDL_Rect also have an SDL_FRect 'F' counterpart, so with just a bit of refactoring you can use SDL_FRect for your camera and object positions in SDL2.

However, in SDL3, nearly all common functions with rectangle inputs expect SDL_FRect, so the integer-rounding problem is solved there by default. This might be a good time to switch the project to the new API.

Now you have two problems - stuttering and pointlessly burning the CPU. If your game needs, for example, 20% of CPU time to maintain a target framerate (e.g. 60 fps), then it should use 20% and not a percent more. Generating hundreds or even thousands of frames when the screen will display only 60 or 144 is just ridiculous.


Create your renderer with SDL_RENDERER_PRESENTVSYNC.
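For example (assuming pWindow has already been created):

SDL_Renderer* pRenderer = SDL_CreateRenderer(
    pWindow, -1,    // -1: first rendering driver supporting the flags
    SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC
);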

The calculations with milliseconds and frames per second are not necessary.
The SDL_Delay is also not necessary.

Example of a game loop without stuttering and flickering:

while (running) {
    DoEventStuff();
    FillRendererWithTextures();
    SDL_RenderPresent(pRenderer);   // with VSync, blocks until the next vblank
    SDL_RenderClear(pRenderer);     // clear the backbuffer for the next frame
}


The user can disable VSync in graphics card driver settings, and your code will burn CPU.

The user j_marvin created the renderer with SDL_RENDERER_PRESENTVSYNC, so I don't think they will disable it in the graphics card driver.

Indeed, the only thing you need to worry about is the possibility of frames being dropped.

There is a fundamental problem with the delta_time approach, which is that when rendering the various elements of your frame you need to position them where they should be when the frame is eventually presented on the monitor, not when you call SDL_RenderPresent() or when that function returns, or at any other point in your main loop.

There is simply no way of knowing precisely when the frame you are rendering will eventually be displayed, and therefore no way of positioning the elements in that frame to eliminate motion artefacts! All you know is that it will be at some time in the future.

Far better to enable SDL_RENDERER_PRESENTVSYNC; then (barring variable-refresh-rate monitors, which are rare) you at least know that your frames will be displayed at multiples of a well-defined period. You can then advance the time which you use to position the various elements by a fixed amount each frame (possibly with a test for dropped frames).
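A minimal sketch of that idea, assuming a 60 Hz display with VSync on (handle_events() and render_scene() are hypothetical stand-ins for your own code):

const double frame_period = 1.0 / 60.0;    // assuming a 60 Hz display
double animation_time = 0.0;
Uint32 last_present = SDL_GetTicks();

while (running) {
    handle_events();
    render_scene(animation_time);          // position elements for this time
    SDL_RenderPresent(renderer);           // blocks until the next vblank

    // Advance by one fixed period per presented frame; if presenting took
    // much longer than one period, assume a dropped frame and advance by
    // an extra period to stay in step with real time.
    const Uint32 now = SDL_GetTicks();
    animation_time += frame_period;
    if (now - last_present > 25) {         // ~1.5x a 16.7 ms frame
        animation_time += frame_period;
    }
    last_present = now;
}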


I meant the end user (player), not the SDL user (game developer).

The variable delta was a popular technique for implementing the main loop about 20 years ago, and it came with a ton of issues, including problems with physics and collisions.

However, a completely different approach has been used for a long time: a constant delta, meaning the logic is updated a predetermined number of times per second. To get smoother animation on screen, interpolated rendering is used; it is based on a variable delta, but one whose value is never greater than that of one logic step.


For example, say the logic is updated 60 times per second, so physics and object movement advance by predefined amounts at each step. If we treat the delta as 1.0 for each of the 60 logic steps, multiplying object parameters by it is unnecessary. During rendering, based on the current time, you work out where the current moment falls between two logic steps, giving a delta between 0.0 and 1.0. This delta is used to interpolate objects during rendering, matching them more closely to real time (smoother animation on screen).

This way, firstly, physics and collision problems are avoided, and thanks to interpolated rendering, animations are smoother. Object interpolation during rendering is not about burning CPU/GPU for fun; it provides the ability to render more frames than the fixed number of logic updates per second, so a game that updates its logic, say, 60 times per second can render, say, 144 unique frames if the player has a monitor with a 144 Hz refresh rate.

There is no guarantee that VSync (or G-Sync, FreeSync) will be enabled, even if you create the renderer with the flag requesting it, so the main game loop should be written in such a way that it does not depend on it and does not render more frames than the current screen refresh rate. With VSync disabled, the downside should be screen tearing, not a burning CPU/GPU.


Rendering is usually a much more time-consuming process than updating logic. So that the logic can still be updated a predetermined number of times per second, certain techniques are used to make it independent of lag and of the load caused by time-consuming rendering. In the case of lag, several logic steps are performed within one iteration of the main loop, and then a frame is rendered from the latest logic state. In the case of very large lags, protection against the so-called spiral of death is implemented (instead of performing ever more logic steps, the clock simply catches up with real time). In the case of network games, the matter is even more complicated.
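A condensed sketch of such a loop (update_logic() and render_interpolated() are hypothetical stand-ins, and the cap of five catch-up steps is an arbitrary choice):

// One logic step = 1/60 s; doubles avoid integer-truncation drift.
const double step_ms = 1000.0 / 60.0;
double logic_time = SDL_GetTicks64();

while (running) {
    handle_events();

    // Run as many fixed logic steps as real time demands, capped so a
    // long stall cannot trigger the spiral of death.
    int steps = 0;
    while (SDL_GetTicks64() >= logic_time + step_ms && steps < 5) {
        update_logic();                    // fixed step, no variable delta
        logic_time += step_ms;
        ++steps;
    }
    if (SDL_GetTicks64() >= logic_time + step_ms) {
        logic_time = SDL_GetTicks64();     // hopeless lag: snap to real time
    }

    // How far the current moment sits between two logic steps, in [0, 1).
    const double alpha = (SDL_GetTicks64() - logic_time) / step_ms;
    render_interpolated(alpha);            // blend previous state toward current
    SDL_RenderPresent(renderer);
}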

Ultimately, the game has stable physics and collisions, and can render as many unique frames as the hardware can produce, up to the monitor's refresh rate. Rendering more frames than the refresh rate makes absolutely no sense and is therefore a pure waste of resources.

Why is this stupid? Modern operating systems routinely support hundreds of processes and thousands of threads, dozens of applications are running at any given time, and each one requires CPU time to function. In addition, most people do not have high-end computers with 20-core processors, so eating all the available CPU time affects the performance and stability of other applications and of the OS itself.

Secondly, using more CPU/GPU time than the game needs drastically increases power consumption (raising the CPU clock speed or even engaging turbo mode), and therefore reduces the battery life of laptops and mobile devices. And laptops not only run on batteries - they usually also have weaker components and lower performance than desktop PCs.


I’m curious/concerned about those 500Hz gaming monitors. I don’t intend to ever get one, but I’ve seen members of the YT gamer-tech crowd pushing them from time to time.

Let's say they do catch on and many gamers (say 20%) have 500 Hz screens; in that world, what would be the preferred methods for handling timing?

Edit:
For my purposes (2D games, mostly hand-drawn animations, no networking), I would want to keep using VSync at 60 Hz.
I think the solution is to use SDL_SetWindowDisplayMode(), which would let me request a 60 Hz refresh rate; I'm pretty sure VSync would take its value from there.
This should probably be paired with SDL_GetClosestDisplayMode() to ensure I'm requesting a supported rate, plus some double-checking that I'm actually getting something reasonable.

I also see that the Desktop version of the function might need to be called when switching from fullscreen to desktop mode?
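The request might look something like this (error handling omitted; window and the display index 0 are placeholders):

// Ask for a mode like the current one, but at 60 Hz.
SDL_DisplayMode desired, closest;
SDL_GetCurrentDisplayMode(0, &desired);
desired.refresh_rate = 60;

if (SDL_GetClosestDisplayMode(0, &desired, &closest) != nullptr) {
    // Only takes effect in "true" (exclusive) fullscreen mode.
    SDL_SetWindowDisplayMode(window, &closest);
    SDL_Log("Using %dx%d @ %d Hz", closest.w, closest.h, closest.refresh_rate);
}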

Ok, but if you create a main loop with the framerate capped at 60 (regardless of the VSync state), then you don't need to worry about the screen refresh rate at all. Such a loop works in a few steps (sketched in code after the list):

  1. Calculate the current moment in time to know how many logic update steps need to be performed.
  2. Update the logic the calculated number of times.
  3. Render the game frame.
  4. Calculate the current moment in time and determine whether you need to wait for the next logic update.
  5. If you need to wait, do it with one of the SDL_Delay* functions.
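
In code, roughly (update_logic() and render_frame() are hypothetical stand-ins for your own functions):

const double step_ms = 1000.0 / 60.0;      // 60 logic updates per second
double next_update = SDL_GetTicks64();

while (running) {
    // Steps 1-2: catch up with real time.
    while (SDL_GetTicks64() >= next_update) {
        update_logic();
        next_update += step_ms;
    }

    // Step 3: render from the latest logic state.
    render_frame();
    SDL_RenderPresent(renderer);

    // Steps 4-5: sleep until the next logic step is due, VSync or not.
    const double now = SDL_GetTicks64();
    if (now < next_update) {
        SDL_Delay(static_cast<Uint32>(next_update - now));
    }
}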

Since such a loop itself takes care of not processing more than the predefined number of frames (logic updates), the VSync state is irrelevant. However, if VSync is active, the higher the monitor's refresh rate the better, because the loop will spend less time waiting inside SDL_RenderPresent, so waiting for the screen update has less impact on the loop's operation. In other words, the higher the refresh rate, the smaller the chance of delays and of having to perform several logic updates within one iteration of the main loop.

And since the refresh rate doesn't matter for such a loop, you also don't have to worry about which mode to use for fullscreen. Besides, you shouldn't use a predetermined one, because you don't know what monitor(s) the player has or which refresh rates are available. By default, use the mode with the native resolution and refresh rate, but you can (and should) give the player the option to choose any other mode their monitor supports.

If you are making a simple game, perhaps with a retro feel, pixel art, etc., then 60 fps and regular rendering (without object interpolation) will be enough for you. Just KISS.


As you go on to say, you still need to be able to handle other refresh rates, because you might not get the one you requested. I don't know how common it is for a monitor not to support 60 Hz, but it could still be set to a different rate, and the refresh rate would only change if you used "true" (exclusive) fullscreen mode. In "desktop" (borderless window) fullscreen mode, which might provide a better user experience, you would still get whatever refresh rate the OS settings dictate. Wayland doesn't seem to allow applications to change the refresh rate at all.

I have seen monitors that have 59.something Hz refresh rates.

Trying to hard-code your framerate to anything is a bad idea, but having a limiter that keeps you from going over 200-300 fps isn't a bad idea. There are 240 Hz monitors out there now.

Sometimes I miss the days when all but the lowest-end CRT monitors could do at least 75-85 Hz.

This is not a problem, because in such a case, with VSync enabled, the main loop will simply perform two logic updates in a row once every dozen or so seconds, catching up with real time. This accumulating micro-delay is not felt at all; it is the same as with interpolated rendering.