Audio latency ambiguity

Hi all,

Looking through the various audio backends, there’s one thing that
strikes me. While I can of course specify the number of samples that I
want per buffer, the number of buffers that are used internally seems to
be an implementation detail. This means that I don’t actually know the
final latency that I am achieving, and it could differ significantly
between backends even if I specify the exact same buffer size. In the
DirectSound backend, for instance, I see:

Uint32 chunksize = this->spec.size;
const int numchunks = 8;

While in the XAudio2 backend we have:

this->hidden->mixlen = this->spec.size;
this->hidden->mixbuf = (Uint8 *) SDL_malloc(2 * this->hidden->mixlen);

In short, two buffers in XAudio2 and 8 buffers in DirectSound. I haven’t
looked at the other implementations in detail yet.
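To put numbers on it, here is a quick back-of-the-envelope calculation. The
1024 samples at 44100 Hz are just example values, and it assumes the chunk
counts above translate directly into queued audio, which may well not be the
case:

#include <stdio.h>

/* Rough worst-case estimate: one buffer of "samples" sample frames at the
   given rate, multiplied by however many chunks the backend keeps around.
   The chunk counts are simply the ones read from the source above. */
static double worst_case_latency_ms(int samples, int freq, int numchunks)
{
    return (double)samples * numchunks * 1000.0 / (double)freq;
}

int main(void)
{
    const int samples = 1024;   /* example value for SDL_AudioSpec.samples */
    const int freq = 44100;     /* example sample rate in Hz */

    printf("XAudio2, 2 buffers:     %.1f ms\n",
           worst_case_latency_ms(samples, freq, 2));
    printf("DirectSound, 8 buffers: %.1f ms\n",
           worst_case_latency_ms(samples, freq, 8));
    return 0;
}

Same requested buffer size, yet anywhere from roughly 46 ms to 186 ms if
those counts really do translate into queued audio.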

My question is, how do I find/control the real latency? I want to be
able to say “give me x samples/milliseconds worth of latency and divide
it into whatever number of buffers you like”. Not getting the exact
amount of latency I ask for is to be expected, of course, but I would
like to know the total latency rather than just the size of a single
chunk when the number of chunks is unknown.
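For reference, this is as far as I can get with the current API as I
understand it. A minimal SDL2 sketch; the silence callback and the
48000/1024 values are just placeholders:

#include <stdio.h>
#include "SDL.h"

/* Minimal callback that only outputs silence; a real program would mix
   its audio here. */
static void SDLCALL fill_silence(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;
    SDL_memset(stream, 0, len);
}

int main(void)
{
    SDL_AudioSpec want, have;
    SDL_AudioDeviceID dev;

    if (SDL_Init(SDL_INIT_AUDIO) != 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_zero(want);
    want.freq = 48000;          /* placeholder values */
    want.format = AUDIO_S16SYS;
    want.channels = 2;
    want.samples = 1024;        /* sample frames per buffer, as requested */
    want.callback = fill_silence;

    dev = SDL_OpenAudioDevice(NULL, 0, &want, &have, SDL_AUDIO_ALLOW_ANY_CHANGE);
    if (dev == 0) {
        fprintf(stderr, "SDL_OpenAudioDevice failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    /* The only latency figure available to me is the length of a single
       buffer; how many of these the backend keeps in flight is not exposed. */
    printf("one buffer = %u sample frames = %.1f ms\n",
           (unsigned)have.samples, have.samples * 1000.0 / have.freq);

    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}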

Is this a known problem, or am I misunderstanding this?

Kind regards,

Philip Bennefall

The intent is typically double buffered audio, as close as we can get with
the backend. Even though we allocate 8 chunks in the DirectSound case, we
race the read pointer so that you’re filling one chunk while the previous
one is being played.
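Roughly speaking, the loop amounts to this - a simplified, standalone
sketch with a simulated play cursor, not the actual backend source:

#include <stdio.h>

#define NUM_CHUNKS 8   /* chunks allocated, as in the DirectSound backend */

/* Standalone illustration, not the real SDL code: the mixer always fills
   the chunk just ahead of the play cursor and then waits for the cursor
   to move into it, so only about one chunk of audio sits queued ahead of
   the hardware even though eight chunks are allocated. The play cursor
   is simulated here rather than read back from DirectSound. */
int main(void)
{
    int play_chunk = 0;   /* chunk the (simulated) hardware is playing */
    int i;

    for (i = 0; i < 16; i++) {
        /* Fill the chunk immediately ahead of the play cursor. */
        int fill_chunk = (play_chunk + 1) % NUM_CHUNKS;
        printf("play cursor in chunk %d, filling chunk %d\n",
               play_chunk, fill_chunk);

        /* Wait for playback to advance into the chunk we just filled
           (the real backend polls the buffer's current play position). */
        play_chunk = fill_chunk;
    }
    return 0;
}

So the eight chunks are a ring to write into, not eight chunks of queued
latency.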

Thanks for the quick response, Sam. Double buffering seems like the best
thing to aim for, but it would be great to have a way of finding out the
final latency we actually got from the given backend - at least the
latency that SDL knows about. Basically, a function to retrieve the
amount of time (in samples, milliseconds, or similar) that must pass
before the backend has actually pushed my audio to the hardware. Is this
doable?
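In the meantime I suppose the closest I can get is to assume the double
buffering you describe and compute it myself, along these lines. This is a
rough helper of my own, not an SDL call, and it is only as good as the
double-buffering assumption holds for every backend:

#include "SDL.h"

/* My own rough estimate, not an SDL function: assume the backend keeps
   roughly two buffers in flight and convert that to milliseconds using
   the spec obtained from SDL_OpenAudioDevice. If a backend queues more
   than that, this number is simply wrong - which is why I would rather
   have SDL report it. */
static double assumed_latency_ms(const SDL_AudioSpec *have)
{
    const int assumed_buffers = 2;   /* double buffering, per your reply */
    return (double)have->samples * assumed_buffers * 1000.0 / (double)have->freq;
}

I would call this with the obtained spec right after SDL_OpenAudioDevice,
but an actual query in the API would obviously be more reliable.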

Thanks!

Kind regards,

Philip Bennefall
