How to minimize delay with SDL_QueueAudio()?

while (pointer2 < wavLength)
{
    // do some pointer work to find the next chunk
    SDL_QueueAudio(deviceId, chunk, chunksize);

    // sleep for one second to simulate the chunk computation
    std::this_thread::sleep_until(std::chrono::high_resolution_clock::now()
                                  + std::chrono::milliseconds(1000));
}

Hey,

I have the piece of code above. I compute a chunk every second and hand it to SDL_QueueAudio(); the thread sleep just simulates that computation. I generate 44100 samples per second, which is what the device expects.

The problem is: if I do it like this, the playback is interrupted. I do realize that I need to compute the chunk data in less than one second, but in practice I have to do it twice as fast (in about 500 ms) to avoid any artifacts.

So my question: can I minimize this somehow? Is there a better way?

I can speed the computation up (down to about 800 ms), but having to reach 500 ms seems too demanding for my taste.

If you’re having to generate a new ‘one second’ chunk every 500 ms, it seems likely that you’ve miscalculated how long the chunk takes to play. If you are outputting 16-bit stereo, one second of audio is 44100 * 2 * 2 = 176400 bytes.
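
For reference, here is a quick way to derive that figure from the spec you actually opened the device with (just a sketch; 'obtained' is assumed to be the SDL_AudioSpec filled in by SDL_OpenAudioDevice()):

#include <SDL.h>

// Bytes needed for one second of audio, derived from the obtained spec.
Uint32 bytesPerSecond(const SDL_AudioSpec& obtained)
{
    Uint32 bytesPerSample = SDL_AUDIO_BITSIZE(obtained.format) / 8; // e.g. 2 for AUDIO_S16
    return bytesPerSample * obtained.channels * obtained.freq;      // 2 * 2 * 44100 = 176400
}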

But in any case I would prefer polling SDL_GetQueuedAudioSize() to discover when a new chunk is needed (or use a callback).
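
Roughly along these lines (only a sketch: the half-second low-water mark is my choice, and generateChunk() is a placeholder for your own computation):

#include <SDL.h>

// Keep the queue topped up: refill whenever less than half a second is buffered.
void feedAudio(SDL_AudioDeviceID deviceId, Uint8* chunk, Uint32 chunkSize, bool& stillPlaying)
{
    const Uint32 bytesPerSecond = 44100 * 2 * 2;      // 16-bit stereo at 44100 Hz
    const Uint32 lowWaterMark   = bytesPerSecond / 2; // refill below ~0.5 s of queued audio

    while (stillPlaying)
    {
        if (SDL_GetQueuedAudioSize(deviceId) < lowWaterMark)
        {
            // generateChunk(chunk, chunkSize);       // compute the next chunk here
            SDL_QueueAudio(deviceId, chunk, chunkSize);
        }
        SDL_Delay(10);                                // avoid busy-waiting
    }
}

That way the computation only has to stay under one second on average; short spikes are absorbed by whatever is still sitting in the queue.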