Hello!
I am starting work on a VoIP/video chat application as a project, and to learn a few things. I think SDL 1.3 looks ideal for my needs (even though it's still in development), because it appears from the API documentation that it supports hardware-accelerated streaming texture uploads and color-space conversion (which saves me from having to mess with PBOs and ARB fragment shaders in OpenGL). Please correct me if I'm wrong here.
However, I'm a bit confused by the sparsely documented audio callback function. I understand that the audio driver will call the function to get a buffer of samples, which it will then play. Because I want to minimize latency and processor overhead while keeping audio timing tight, I need to figure out when it will need those samples, and how best to populate the buffer it hands me.
Firstly, how much execution time is reasonable for the audio callback function? If SDL buffers a fair way ahead, I could call my audio decoder and populate the buffer directly from it within the callback, but if the callback is expected to return quickly, I need to decode the audio somewhere else.
Also, I want to sync video to the audio clock, delaying or skipping frames based on audio timing (again, to minimize latency). It seems reasonable to take the timestamp of the audio segment being passed to the callback and update some shared variable, so the video (probably running in a different thread, once I get that figured out) knows when to refresh or skip a frame. However, this again depends on how much buffering is done within SDL and the audio device.
Does anyone know about the details involved in this?
Thanks in advance for your help!