Audio - understanding the absolute buffer size

In a similar vein, would it make sense to make the number of periods an input to the SDL_AudioSpec?

Also, with the new audio queueing API, is there any interest in supporting blocking behavior? Essentially, making SDL_QueueAudio block until there's enough room in the audio buffer to write out the data. This would be immensely helpful in my problem domain (emulation), where execution speed is often tied to the audio clock.

In my ideal world, I'd be able to configure a buffer of, say, 1024 frames split into 4 periods of 256 frames each, which I could enqueue to in a blocking manner. I'd like the larger overall buffer to absorb long frames caused by excessive code compilation, etc., but smaller periods to provide more frequent interrupts telling the emulator to resume execution, for more consistent frame pacing.

I currently simulate my ideal world on top of SDL's callback API, but it'd be great if I could let SDL handle the n-buffering itself, so I can enqueue multiple periods ahead of time in case the callback is latent.
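For context, my current simulation is roughly the following shape: a ring of fixed-size period buffers guarded by a mutex and condition variables, where the emulator blocks on write and the audio callback drains one period at a time. All names and sizes here are my own sketch, not SDL API:

```c
#include <pthread.h>
#include <string.h>

#define PERIOD_FRAMES 256
#define NPERIODS 4

typedef struct {
    float data[NPERIODS][PERIOD_FRAMES * 2]; /* stereo, interleaved */
    int count, head, tail;                   /* filled periods, ring indices */
    pthread_mutex_t lock;
    pthread_cond_t can_write, can_read;
} PeriodRing;

void ring_init(PeriodRing *r) {
    memset(r, 0, sizeof(*r));
    pthread_mutex_init(&r->lock, NULL);
    pthread_cond_init(&r->can_write, NULL);
    pthread_cond_init(&r->can_read, NULL);
}

/* Producer side (emulator thread): blocks until a period slot is free,
 * i.e. the blocking-enqueue semantics I'd like from SDL_QueueAudio. */
void ring_write(PeriodRing *r, const float *period) {
    pthread_mutex_lock(&r->lock);
    while (r->count == NPERIODS)
        pthread_cond_wait(&r->can_write, &r->lock);
    memcpy(r->data[r->head], period, sizeof(r->data[0]));
    r->head = (r->head + 1) % NPERIODS;
    r->count++;
    pthread_cond_signal(&r->can_read);
    pthread_mutex_unlock(&r->lock);
}

/* Consumer side: called once per period from the audio callback.
 * (Shown blocking for symmetry; a real audio callback should never
 * block and would emit silence when the ring is empty instead.) */
void ring_read(PeriodRing *r, float *out) {
    pthread_mutex_lock(&r->lock);
    while (r->count == 0)
        pthread_cond_wait(&r->can_read, &r->lock);
    memcpy(out, r->data[r->tail], sizeof(r->data[0]));
    r->tail = (r->tail + 1) % NPERIODS;
    r->count--;
    pthread_cond_signal(&r->can_write);
    pthread_mutex_unlock(&r->lock);
}
```

The emulator wakes every time a 256-frame period drains, while the 4-deep ring gives the 1024-frame cushion against long frames.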

Sorry this turned into a bit of a long ramble - I’m just trying to get a feel for what your direction with the API is, and if there’s any overlap that I could help contribute to.