I meant the end user (player), not the SDL user (game developer).
A variable delta was a popular technique for implementing the main loop about 20 years ago, and it came with a ton of issues, including problems with physics and collisions.
A completely different approach has been in use for a long time now, though: a constant delta, meaning the logic is updated a predetermined number of times per second. To get smoother animations on the screen, interpolated rendering is added on top of it; rendering still works with a variable delta, but its value never exceeds the length of one logic step.
For example, the logic is updated 60 times per second. This means that the physics and movement of objects are advanced at each step by predefined values. If we assume that the delta is 1.0 for each of those 60 logic steps, multiplying the object parameters by it is unnecessary. During rendering, based on the current time, you calculate where the current moment falls between two logic steps, so the rendering delta is between 0.0 and 1.0. This delta is used to interpolate objects during rendering to better match them to real time (smoother animations on the screen).
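A minimal sketch of what that interpolation might look like at render time (the struct and function names here are placeholders I made up, not anything from SDL):

```c
/* Hypothetical per-object state produced by the logic; the game keeps the
   state from the previous logic step and the current one. */
typedef struct {
    float x, y;
} ObjectState;

/* Provided elsewhere by the game's renderer (placeholder). */
void draw_object(float x, float y);

/* alpha is the rendering delta in [0.0, 1.0): how far the current real
   time has progressed between the previous and the current logic step. */
void render_object_interpolated(const ObjectState *prev,
                                const ObjectState *curr,
                                float alpha)
{
    float x = prev->x + (curr->x - prev->x) * alpha;
    float y = prev->y + (curr->y - prev->y) * alpha;

    draw_object(x, y);
}
```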
This way, physics and collision problems are avoided, and thanks to interpolated rendering, animations are smoother. Object interpolation during rendering is not about burning CPU/GPU for fun; it provides the ability to render more frames than the fixed number of logic updates per second, so a game that updates its logic 60 times per second can still render, say, 144 unique frames on screen if the player has a monitor with a 144Hz refresh rate.
There is no guarantee that VSync (or G-Sync, FreeSync) will be enabled, even if you create the renderer with the flag that requests it, so the main game loop should be written so that it does not depend on VSync and does not render more frames than the current refresh rate of the screen. With VSync disabled, the downside should be screen tearing, not burning the CPU/GPU.
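When VSync is not active, one way to avoid burning the CPU/GPU on frames nobody can see is a simple frame cap. A rough sketch (the 144 FPS target is just an example and should really come from the display's actual refresh rate; SDL_Delay only guarantees a minimum wait, and SDL_GetTicks64 needs SDL 2.0.18+):

```c
#include <SDL.h>

#define TARGET_FPS 144   /* example value; query the monitor's refresh rate in a real game */

/* Call at the end of a frame, passing the SDL_GetTicks64() value taken at
   the start of the frame. Sleeps away whatever is left of the frame budget. */
void cap_frame_rate(Uint64 frame_start_ms)
{
    const double budget_ms = 1000.0 / TARGET_FPS;
    double elapsed_ms = (double)(SDL_GetTicks64() - frame_start_ms);

    if (elapsed_ms < budget_ms)
        SDL_Delay((Uint32)(budget_ms - elapsed_ms));
}
```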
Rendering is usually a much more time-consuming process than updating the logic. So that the logic can still be updated the predetermined number of times, certain techniques are used to make it independent of lag and of the load caused by time-consuming rendering. In the case of lag, several logic steps are performed within one iteration of the main loop, and then a frame is rendered from the latest logic state. In the case of very large lag, protection against the so-called spiral of death kicks in (instead of performing many catch-up steps, only one is performed and the clock is resynchronized with real time). In the case of network games, the matter is even more complicated.
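Putting all of that together, a skeleton of such a loop might look roughly like this. update(), render() and the exact catch-up cap are placeholders; the description above drops to a single step under very large lag, while this sketch uses a small constant cap and discards the leftover time either way:

```c
#include <SDL.h>

#define LOGIC_HZ     60
#define MAX_CATCH_UP 5        /* spiral-of-death protection; could be 1 as described above */

/* Provided elsewhere by the game (placeholders). */
void update(double dt);       /* one fixed logic step                        */
void render(float alpha);     /* interpolated rendering, alpha in [0.0, 1.0) */

void run_main_loop(const int *running)
{
    const double step = 1.0 / LOGIC_HZ;
    Uint64 freq = SDL_GetPerformanceFrequency();
    Uint64 prev = SDL_GetPerformanceCounter();
    double accumulator = 0.0;

    while (*running) {
        Uint64 now = SDL_GetPerformanceCounter();
        accumulator += (double)(now - prev) / (double)freq;
        prev = now;

        /* Catch up on missed logic steps, but never more than MAX_CATCH_UP,
           so a long render or a lag spike cannot trigger the spiral of death. */
        int steps = 0;
        while (accumulator >= step && steps < MAX_CATCH_UP) {
            update(step);        /* always the same fixed delta */
            accumulator -= step;
            steps++;
        }
        if (accumulator >= step)
            accumulator = 0.0;   /* still behind after the cap: drop the backlog, resync */

        /* Rendering delta between the last two logic states. */
        render((float)(accumulator / step));
    }
}
```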
Ultimately, the game has its physics and collisions secured, and it can render as many unique frames as the hardware can manage, up to the monitor's refresh rate. Rendering more frames than the refresh rate makes absolutely no sense and is therefore a pure waste of resources.
Why is this stupid? Modern operating systems usually support hundreds of processes and thousands of threads, dozens of applications are running at any given time, and each one requires CPU time to function. In addition, most people do not have high-end computers with 20-core processors, so using all the available CPU time affects the performance and stability of other applications and the OS.
Secondly, using more CPU/GPU time than the game needs drastically affects power consumption (increased CPU clock speed or even enabling CPU turbo mode), and therefore reduces the battery life of laptops and mobile devices. And laptops not only run on batteries, but usually have weaker components and lower performance than traditional PCs.