I’m a bit new to audio development, so I need some help.
I have a parallel thread in my engine that streams audio data generated by software synths. However, there's no sound, and the test program's memory usage grows quickly once the thread is initialized. While the first issue might be my own fault (e.g. a NaN value muting the output), the second one is very concerning.
Here’s the pseudocode of my thread entry point:
def threadEntry()
    while isRunning() == true
        renderAudioToBuffers()
        SDL_QueueAudio(getAudioDeviceID(), finalBuffer.ptr, finalBuffer.length * float.sizeof)
I have a feeling that this pushes way more data to the audio device than it's able to handle, since there's no scheduling at all. The documentation of SDL_QueueAudio isn't very clear in this regard.
Should I suspend the thread every time it queues a chunk of audio data, then resume it when new audio data is needed? SDL_AudioCallback doesn't work for me at all for some reason, and it's harder to get working with D code.
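To make the question more concrete, here is roughly what I have in mind, in the same pseudocode style. It polls SDL_GetQueuedAudioSize and sleeps whenever enough data is already queued, so the queue can't grow without bound. The maxQueuedBytes threshold is just a made-up name for "a few buffers' worth of data"; I don't know what a sensible value would be:

def threadEntry()
    while isRunning() == true
        // if enough audio is already queued ahead, wait instead of rendering more
        if SDL_GetQueuedAudioSize(getAudioDeviceID()) >= maxQueuedBytes
            SDL_Delay(1)    // or block on a condition variable / semaphore
            continue
        renderAudioToBuffers()
        SDL_QueueAudio(getAudioDeviceID(), finalBuffer.ptr, finalBuffer.length * float.sizeof)

Is this the intended way to pace SDL_QueueAudio, or is there a better mechanism I'm missing?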
I can link to my engine's code if needed.