SDL audio callback precision

I’m confused as to how the SDL audio callback works.

I have set it up with a buffer size of 2048 samples, so each time it calls back into my code it asks for 2048 samples. I use that sample count, together with the sample rate, to control the timing of queuing up wave files (similar to DT42). And the whole thing sounds fine: if I sequence out a beat at 120 bpm and play it using the callback logic and my scheduler, everything triggers at the right time.
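For reference, here’s roughly the shape of my setup (a simplified sketch, not my actual code; the scheduler and mixing are stubbed out with silence):

```cpp
#include <SDL.h>
#include <cstring>

struct AudioState {
    Uint64 samples_played = 0;  // running total, advanced by the callback
};

static void audio_callback(void* userdata, Uint8* stream, int len) {
    auto* state = static_cast<AudioState*>(userdata);
    std::memset(stream, 0, len);                   // silence; real mixing goes here
    state->samples_played += len / sizeof(float);  // AUDIO_F32, mono: 4 bytes/sample
}

int main(int, char**) {
    SDL_Init(SDL_INIT_AUDIO);
    AudioState state;

    SDL_AudioSpec want{}, have{};
    want.freq = 44100;              // sample rate (Hz)
    want.format = AUDIO_F32;        // 32-bit float samples
    want.channels = 1;
    want.samples = 2048;            // buffer size: samples requested per callback
    want.callback = audio_callback;
    want.userdata = &state;

    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(nullptr, 0, &want, &have, 0);
    SDL_PauseAudioDevice(dev, 0);   // unpause: callbacks start firing
    SDL_Delay(2000);                // let it run for a couple of seconds
    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}
```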

But if I throw a std::chrono timer into the callback loop, I get wildly different times between calls to the callback: 90 ms, 50 ms, 40 ms, etc.
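Roughly what my measurement looks like (simplified; the actual mixing is omitted, and printing from a real-time callback is just for debugging):

```cpp
#include <SDL.h>
#include <chrono>
#include <cstdio>

static void audio_callback(void* userdata, Uint8* stream, int len) {
    using clock = std::chrono::steady_clock;
    static clock::time_point last = clock::now();

    auto now = clock::now();
    double ms = std::chrono::duration<double, std::milli>(now - last).count();
    std::printf("%.1f ms since last callback\n", ms);  // prints 90, 50, 40, ...
    last = now;

    // ...then mix/queue wave data into `stream` as before...
    (void)userdata; (void)stream; (void)len;
}
```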

That doesn’t make any sense to me. How can a callback that asks for a fixed amount of data, but is called at inconsistent intervals, produce consistent audio?

This is a big problem for me right now because I am trying to synchronize visuals with my audio playback. My render thread runs much faster than my audio thread, so if I derive a ‘beat time’ from the number of samples requested by the DSP, I get stuttering (the time only updates once every 2048 samples, and apparently at an inconsistent rate).

The reason the audio still sounds right is that the audio device consumes samples at a fixed rate from a buffer: playback timing is determined by a sample’s position in the stream, not by the wall-clock moment the callback fired. As long as each callback refills the buffer before it runs dry, the jitter you measured is inaudible.

System timers and SDL timers typically only guarantee that they will fire at or after the requested time point, never before; because of the nature of preemptive multitasking operating systems, they make no promise of firing at exactly the requested time, and they will usually fire a little late. Sometimes they can fire much later. Some operating systems offer low-latency timers meant for audio and video work.

Custom timers built on SDL_Delay() or std::this_thread::sleep_for() suffer from the same problems, since they are implemented on top of the operating system’s timers.
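You can see the slop for yourself with a standalone test like this (a sketch; the exact overshoot depends on your OS and load):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    for (int i = 0; i < 5; ++i) {
        auto start = clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // ask for 10 ms
        double got = std::chrono::duration<double, std::milli>(clock::now() - start).count();
        std::printf("asked for 10 ms, slept %.3f ms\n", got);  // usually > 10, sometimes much more
    }
    return 0;
}
```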

edit: so I would suggest a different method than trying to use std::chrono to insert a delay in your audio callback to maintain sync. Instead, use SDL_GetPerformanceCounter() or std::chrono::high_resolution_clock in your callback to measure elapsed time.
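For the visual-sync problem specifically, one common approach (sketched below; g_samples_played, kSampleRate, and kBPM are placeholder names, not anything from your code) is to treat the audio sample count as the master clock and interpolate between its coarse 2048-sample updates with the performance counter on the render thread:

```cpp
#include <SDL.h>
#include <atomic>

// Shared state: the audio callback does
//   g_samples_played.fetch_add(samples_in_buffer);
// every time it runs.
std::atomic<Uint64> g_samples_played{0};
const double kSampleRate = 44100.0;  // whatever your device actually opened with
const double kBPM = 120.0;

// Called from the render thread each frame.
double smoothed_beat_time() {
    static Uint64 last_samples = 0;
    static Uint64 last_counter = SDL_GetPerformanceCounter();

    Uint64 samples = g_samples_played.load(std::memory_order_relaxed);
    Uint64 now = SDL_GetPerformanceCounter();

    if (samples != last_samples) {  // audio clock ticked: resynchronize
        last_samples = samples;
        last_counter = now;
    }
    // Wall-clock seconds since the audio clock last advanced, used only to
    // interpolate between the coarse 2048-sample updates.
    double since = double(now - last_counter) / double(SDL_GetPerformanceFrequency());

    // Note: samples_played counts samples *generated*, which leads what you
    // actually hear by roughly one buffer of output latency.
    double audio_seconds = double(last_samples) / kSampleRate + since;
    return audio_seconds * (kBPM / 60.0);  // beats elapsed
}
```

Because the interpolation resets every time the sample counter advances, any drift between the wall clock and the audio clock stays bounded to one buffer’s worth of time (2048 samples ≈ 46 ms at 44.1 kHz).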