Odd iOS/Metal framerate/vsync issue

I’m developing a video player and I’ve come across something that has me scratching my head.

The frame durations on iOS are fractionally higher than I expect, leading to occasional but regular frame drops with my player.

Thing is, same code works fine on macOS.

I use an std::chrono::steady_clock::time_point for my measurements (I’ve also tested with mach_time just in case, but that gave the same results), and I find that, for a supposed 60Hz refresh rate, the average duration between RenderPresent calls differs ever so slightly between macOS and iOS:

Average frame duration (macOS): 16666.270459 us
Average frame duration (iOS):   16675.307296 us

The average fluctuates a tiny bit from the above, but iOS refreshes consistently seem to take about 9us more than they do on macOS.

So over one minute that’s 9 x 60 x 60 = 32400us of drift, which at roughly 16667us per frame works out to about a couple of frames dropped on average per minute for a 60fps video.

I have no idea what I am doing wrong. I get these same measurements from a simple program that does nothing but set the render target, draw a dot, and present it. I’ve reproduced it on an iPhone X, and an iPad Pro exhibits a similar margin of error at its 120Hz refresh rate.

On macOS I enable the vsync flag, but on iOS this doesn’t do anything because, looking at the SDL code, vsync is always on there anyway.
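
For reference, this is the sort of minimal test program I mean; just a rough sketch (I’m assuming the usual SDL_RENDERER_PRESENTVSYNC flag here, and error handling is omitted):

	#include <SDL.h>

	int main(int argc, char *argv[])
	{
		SDL_Init(SDL_INIT_VIDEO);

		SDL_Window *window = SDL_CreateWindow("vsync test",
			SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
			1280, 720, SDL_WINDOW_ALLOW_HIGHDPI);

		// Request vsync; on iOS presentation is tied to the display refresh regardless.
		SDL_Renderer *renderer = SDL_CreateRenderer(window, -1,
			SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

		for (;;)
		{
			SDL_Event event;
			while (SDL_PollEvent(&event))
				if (event.type == SDL_QUIT)
					return 0;

			// Draw a single dot and present; the frame duration is measured
			// across successive SDL_RenderPresent calls.
			SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
			SDL_RenderClear(renderer);
			SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
			SDL_RenderDrawPoint(renderer, 10, 10);
			SDL_RenderPresent(renderer);
		}
	}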

Has anyone else noticed anything like this?

The second figure you quote is actually closer to the ‘NTSC’ frame rate of 60 * 1000 / 1001 (59.94005994… fps) than to 60. The nominal periods are 16.666666… ms for 60 fps and 16.683333… ms for ‘NTSC’. I would expect a video player to be able to cope with both frame rates, plus a tolerance for expected variation in crystal frequency etc.
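
To spell that arithmetic out, here’s a trivial sketch you can compile anywhere (the ‘measured’ value is just the iOS figure quoted above):

	#include <cstdio>

	int main()
	{
		const double period60   = 1000000.0 / 60.0;              // 16666.666... us per frame at 60 fps
		const double periodNTSC = 1000000.0 * 1001.0 / 60000.0;  // 16683.333... us per frame at 59.94 fps
		const double measured   = 16675.307296;                  // the iOS average quoted above, in us

		printf("60.000 fps period: %.6f us\n", period60);
		printf("59.940 fps period: %.6f us\n", periodNTSC);
		printf("measured average:  %.6f us\n", measured);
		return 0;
	}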

Thanks. It’s still not refreshing at 59.94 though, even if that’s closer to it than 60Hz. My initial thought was that the displays on these devices are 59.94, so I fed it a 59.94 video source, but nope: you get a similar effect in that the refresh rate still doesn’t match the video rate.

The way I have it working is I calculate the expected presentation time from the video frame pts, and then modify this so that it falls halfway through a refresh. This is so that if a frame arrives fractionally late or early, it still gets drawn at the right time. So the first frame gets displayed around 8.3ms after the last refresh time. But what I find is that this figure of how far into the refresh interval the frame is displayed (the 8.3ms) gradually changes over the course of a minute, because the actual refresh rate is marginally different from the one I’m assuming. On macOS and tvOS it works just fine, because the refresh rate is as I expect. On iOS it’s weirdly slightly different. I don’t know if this is just the way the hardware works or something about SDL2. Maybe I’ll make a simple non-SDL sample app and see what that yields.
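
In case it helps to picture it, here’s a rough sketch of the mid-refresh snapping I described. The names and structure are made up for illustration (this isn’t my actual player code), and it assumes you have the last observed refresh time and an assumed refresh period to hand:

	#include <chrono>

	using Clock = std::chrono::steady_clock;

	// Shift a pts-derived display time so that it lands halfway between two
	// vsyncs, given the last observed refresh time and the assumed refresh
	// period. If the assumed period is slightly wrong (e.g. 60Hz vs 59.97Hz),
	// the real vsyncs slowly walk away from these midpoints.
	Clock::time_point snapToMidRefresh(Clock::time_point target,
	                                   Clock::time_point lastRefresh,
	                                   Clock::duration refreshPeriod)
	{
		// Whole refresh intervals between the last vsync and the target time
		// (duration / duration yields an integer count).
		auto intervals = (target - lastRefresh) / refreshPeriod;

		// Midpoint of the interval the target falls in, e.g. ~8.3ms after a
		// vsync for a ~16.7ms refresh period.
		return lastRefresh + intervals * refreshPeriod + refreshPeriod / 2;
	}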

I’m not convinced that the frequency variation you are experiencing is any more than might be expected given the cheap crystal oscillators used in mobile devices. It may well be that the Mac’s frame rate is more accurate because it contains a higher quality (and physically larger) quartz crystal.

Yeah, maybe. It’s consistently different, that’s for sure. I can repeat the test and leave it running for several minutes and get the same results. It always tends towards a 9us difference.

Maybe it’s something that nobody notices!

This is interesting. I haven’t tried it, but it mentions a max iOS refresh of 59.97 which tallies exactly with my calculations.
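
(For anyone checking the numbers: 1000000 / 16675.3 ≈ 59.97, so the average frame duration measured above corresponds to a 59.97Hz refresh, whereas 1000000 / 16666.3 ≈ 60.00 for the macOS figure.)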

So just to wrap this up in case anyone is ever interested: it does indeed appear that at least some iOS devices max out at a refresh rate of 59.97Hz.

Should anyone else ever come across this discrepancy and want to verify it, here is a simple code snippet that can be placed after the render present call. It waits 5s before it starts counting, to discount any slow refreshes during app startup.

The output of this locks to 60.0 for macOS and 59.97 for iOS on an iPhone X.

	// Requires <chrono>, <cstdio> and <cstdint> to be included.
	std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();
	static std::chrono::steady_clock::time_point t = now;   // time of the previous present
	static bool started = false;
	static uint64_t total = 0;   // accumulated frame time in nanoseconds
	static uint64_t count = 0;   // number of frame intervals measured
	if (!started)
	{
		// Skip the first 5 seconds so slow refreshes during app startup don't skew the average
		if (std::chrono::duration_cast<std::chrono::seconds>(now - t).count() > 5)
		{
			count = 0;
			total = 0;
			t = now;
			started = true;
		}
	}
	else
	{
		total += std::chrono::duration_cast<std::chrono::nanoseconds>(now - t).count();
		count++;
		t = now;
		// total/count is the average frame duration in ns; invert it for the refresh rate in Hz
		printf("Avg frame rate %2.2f\n", 1000000000.0 / ((double)total / (double)count));
	}