Ah, there we go; that’s the core misunderstanding.
Modern games generally get written using one of two broad styles to cope with exactly that problem of displays running at different rates. (caveat: all of the code below is being written directly into this post and has never been compiled or tested; I’ve probably made some typos in it! But it’s just supposed to get the broad ideas across)
Type 1: “Variable Timestep”
In this approach, a game measures the time between frames using a high precision clock, and then you have your game simulation use that time to determine how far ahead to run the simulation. With SDL code, your game loop would look something like this:
// initialize this before the loop so the first frame's timestep isn't garbage
uint64_t ticks_last_frame = SDL_GetPerformanceCounter();

while ( !quitting )
{
    // note that "ticks" is an unknown duration of time which will vary across different hardware. All we
    // really know about it for certain is that it starts at 0 when the game is launched and goes up from
    // there over time. SDL_GetPerformanceFrequency() tells us how to convert it into seconds.
    uint64_t ticks_now = SDL_GetPerformanceCounter();

    // how long has it been since the last frame was rendered? This is how much time our simulation needs to run!
    float timestep = (float)(ticks_now - ticks_last_frame) / (float)SDL_GetPerformanceFrequency();
    ticks_last_frame = ticks_now;

    // this object moves according to its velocity, which is expressed as a distance *per second*
    // that it travels, regardless of frame rate. Acceleration would affect velocity in the same way.
    object.position += object.velocity * timestep;

    Game_Render();
}
The chief benefit of this approach is that it’s easy to implement. The only real downside is that intensive physics simulations can sometimes go a little quirky if you don’t give them the same size timestep every frame. (It’s quite common for physics engines to run their own little “fixed timestep” system inside a mostly variable-timestep game to deal with that problem.)
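One related trick worth knowing: a lot of variable-timestep games clamp the measured timestep so a single long stall (a debugger pause, the window being dragged) can’t feed the simulation one enormous step. A minimal sketch of that idea; the function name and the 0.1-second cap are my own illustrative choices, not anything from SDL:

```c
#include <assert.h>

// Hypothetical helper: clamp one frame's measured timestep so a single
// long stall can't hand the physics a huge step. The cap value passed
// in (e.g. 0.1 seconds) is an arbitrary illustrative choice.
static float clamp_timestep(float timestep, float max_step)
{
    return (timestep > max_step) ? max_step : timestep;
}
```

A normal 16ms frame passes through unchanged; a half-second stall gets capped to 0.1 seconds of simulated time, so the game briefly runs slow instead of objects teleporting through walls.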
Type 2: “Fixed Timestep”
Under this approach, you declare that your game simulation is only going to run at a particular frame rate, and you instead run your simulation a variable number of times per rendered frame. It looks something like this:
// initialize this before the loop so the first frame's delta isn't garbage
uint64_t ticks_last_frame = SDL_GetPerformanceCounter();

while ( !quitting ) {
    uint64_t ticks_now = SDL_GetPerformanceCounter();
    const uint64_t ticks_per_frame = SDL_GetPerformanceFrequency() / 60; // let's run our simulation at a fixed 60fps
    const float c_fixedTimestep = 1.0f / 60.0f;

    if ( ticks_now < ticks_last_frame + ticks_per_frame ) {
        // consider a small SDL_Delay() here to let the CPU sleep a bit? We seem to be running fast.
        // regardless, no need to update anything or render a frame right now; let's go around the loop
        // again until it's time to do an update!
    } else {
        // enough time has passed for us to update the simulation state and render a frame.
        while ( ticks_now >= ticks_last_frame + ticks_per_frame ) {
            // here we always update our object's position according to c_fixedTimestep,
            // and if we're rendering slowly we'll update our simulation as often as we need to
            // in order to catch up.
            object.position += object.velocity * c_fixedTimestep;
            ticks_last_frame += ticks_per_frame;
        }
        Game_Render();
    }
}
The chief benefit of using a fixed timestep is that it makes physics systems really happy and it makes everything deterministic and repeatable: given the same inputs, your simulation will run the same way every time, because every update uses the exact same timestep. There’s no messy “how long has it been since the last frame?” in the middle of your simulation, the way there is with a variable timestep.
The downside is that you can wind up in what people call a “death spiral” if your simulation takes more time to run than the span of time it simulated. That is, if it takes you 0.033 seconds to simulate 0.016 seconds of game time, then you’ll never catch up and your game will eventually appear to freeze. (Lots of games cap the number of times they’ll go around that simulation loop per frame, so the game just appears to slow down instead of freezing in that situation.)
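That cap is easy to add to the catch-up loop from the code above. A minimal sketch of the idea; the function name, the pointer-based interface, and the caller-chosen cap are my own illustrative choices, not SDL API:

```c
#include <assert.h>
#include <stdint.h>

// Hypothetical sketch of the "cap the catch-up loop" fix: run at most
// max_steps fixed updates per rendered frame, so a simulation that
// can't keep up degrades into slow motion instead of a frozen game.
static int run_catchup_steps(uint64_t *ticks_last_frame, uint64_t ticks_now,
                             uint64_t ticks_per_frame, int max_steps)
{
    int steps = 0;
    while (ticks_now >= *ticks_last_frame + ticks_per_frame && steps < max_steps) {
        // ...advance the simulation by the fixed timestep here...
        *ticks_last_frame += ticks_per_frame;
        ++steps;
    }
    return steps;
}
```

If the game falls ten frames behind and the cap is five, only five updates run this frame; the remaining backlog carries over (some games also snap `ticks_last_frame` forward at that point to discard it entirely).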
The other major downside is kind of what we were talking about earlier: if you’re only producing 60 new frames per second on a screen that refreshes 75 times per second, those 60 frames won’t all stay on the monitor for the same length of time; it’ll look strange and stuttery, as if you were dropping frames. (Which you kind of are, albeit intentionally, if you think about it.)
The usual approach to address this is to decouple your rendering from the simulation entirely; set things up so your simulation runs at one rate while your rendering runs at another, interpolating your game objects between the positions computed by the simulation, so that rendering stays smooth at 75fps or 300fps even when the simulation is only running at 60Hz or lower. All of that is a big and complicated topic, and folks usually point to resources like Fix Your Timestep! | Gaffer On Games for the gory details.
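Just to make the interpolation part concrete, here’s a tiny sketch. The function name and the idea of keeping the previous simulation state around are my own illustration (the Gaffer article covers the full treatment); “alpha” is how far we are through the current fixed step, from 0 to 1:

```c
#include <assert.h>

// Hypothetical sketch of render interpolation: the renderer draws the
// object somewhere between its previous and current simulated positions.
// alpha = 0 draws it at the old position, alpha = 1 at the new one.
static float interpolate_position(float prev_pos, float curr_pos, float alpha)
{
    return prev_pos + (curr_pos - prev_pos) * alpha;
}
```

So if the renderer wakes up halfway through a 60Hz simulation step, it passes alpha = 0.5 and draws the object midway between its last two simulated positions, rather than snapping it to whichever state the simulation last produced.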
Conclusion
Hopefully that all makes sense! For the vast majority of folks the recommendation is to go with a variable timestep. It’s simple and easy and will give you good results the majority of the time for the majority of games. In twenty years in the mainstream industry, I have only once worked on a game which used fixed timesteps for everything (and it was a pain to work with, but I understand why that specific project needed them).