I’m developing a video player and I’ve come across something that has me scratching my head.
The frame durations on iOS are fractionally higher than I expect, leading to occasional but regular frame drops with my player.
The thing is, the same code works fine on macOS.
I use an std::chrono::steady_clock::time_point for my measurements (I’ve also tested with mach_time just in case, but that gave the same results), and I find that for a supposed 60Hz refresh rate, the average duration between RenderPresent calls differs ever so slightly between macOS and iOS:
macOS: Average frame duration 16666.270459 us
iOS:   Average frame duration 16675.307296 us
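For reference, those averages come from timing consecutive RenderPresent calls, roughly like this (a trimmed-down sketch of the real player code; the drawing, frame count and print interval are placeholders):

```cpp
#include <SDL.h>
#include <chrono>
#include <cstdio>

// Measure the average interval between SDL_RenderPresent calls
// using std::chrono::steady_clock.
static void measure_present_interval(SDL_Renderer* renderer, int frames)
{
    using namespace std::chrono;

    double total_us = 0.0;
    steady_clock::time_point last = steady_clock::now();

    for (int i = 0; i < frames; ++i) {
        SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
        SDL_RenderClear(renderer);
        SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
        SDL_RenderDrawPoint(renderer, 10, 10);   // draw a single dot
        SDL_RenderPresent(renderer);             // blocks on vsync

        steady_clock::time_point now = steady_clock::now();
        total_us += duration<double, std::micro>(now - last).count();
        last = now;
    }

    std::printf("Average frame duration %f\n", total_us / frames);
}
```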
The averages fluctuate a tiny bit from run to run, but iOS refreshes consistently take about 9us longer than macOS.
So over one minute that’s 9 x 60 x 60 = 32400us of drift, roughly two frame periods, which means about a couple of frames dropped per minute on average for a 60fps video.
I have no idea what I am doing wrong. I can reproduce the same measurements with a simple program that does nothing but set the render target, draw a dot, and present it (essentially the loop shown above). I’ve reproduced it on an iPhone X, and an iPad Pro shows the same margin of error at its 120Hz refresh rate.
On macOS I create the renderer with the enable-vsync flag, but on iOS this doesn’t do anything because, looking at the SDL code, vsync is always on there anyway.
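For completeness, the setup is roughly the following; SDL_RENDERER_PRESENTVSYNC is the flag I mean, and the window size and frame count are arbitrary:

```cpp
#include <SDL.h>

int main(int argc, char* argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    SDL_Window* window = SDL_CreateWindow("vsync timing test",
                                          SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          1280, 720, SDL_WINDOW_ALLOW_HIGHDPI);

    // SDL_RENDERER_PRESENTVSYNC is the "enable vsync" flag referred to above;
    // on iOS it makes no difference because vsync is always on there anyway.
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1,
                                                SDL_RENDERER_ACCELERATED |
                                                SDL_RENDERER_PRESENTVSYNC);

    measure_present_interval(renderer, 3600);  // the loop from the earlier snippet

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```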
Has anyone else noticed anything like this?