Audio Callback function requiring better docs?

Will the len in bytes requested always be the same for each callback, so long as the size of desired.samples remains constant?

The only way I can envision calculating time according to audio under SDL is to assume that the callback happens within a very small tolerance of when the last byte was emptied out of the buffer before the callback occurred, and furthermore, that that last byte will very “SOON” hit the actual speakers. These assumptions seem “reasonable” only for small buffer sizes.

I’m considering using SDL_Audio and SDL_Video for some sample media player code, but I cannot use SDL_Audio with much confidence until I can use it as a timekeeper. If, in practice, I can calculate to within 1/10 of a second when a sample should hit the speaker, then I could probably get excited about SDL_Audio as a cross-platform video tool.
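Something like this is what I have in mind (just a sketch; fill_audio and audio_time are my own names, and I’m assuming AUDIO_S16 here, so 2 bytes per sample - the sample counter as a clock is exactly the assumption I’m unsure about):

```c
#include <SDL/SDL.h>
#include <string.h>

static SDL_AudioSpec obtained;     /* filled in by SDL_OpenAudio(&desired, &obtained) */
static Uint32 frames_played = 0;   /* total sample frames handed to SDL */

static void fill_audio(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;

    /* ...mix/copy 'len' bytes of real audio into 'stream' here... */
    memset(stream, obtained.silence, len);

    /* bytes -> sample frames: 2 bytes per sample for AUDIO_S16,
       times the number of channels */
    frames_played += len / (2 * obtained.channels);
}

/* "Audio time" in seconds - only as good as the assumption that the
   callback fires right as the previous buffer drains. Wrap reads from
   the main thread in SDL_LockAudio()/SDL_UnlockAudio(), since the
   callback runs in SDL's audio thread. */
static double audio_time(void)
{
    return (double)frames_played / (double)obtained.freq;
}
```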

So what I want to know is… who else is thinking about these things, and are there any detailed samples or docs that could help me achieve this goal of timing according to audio played under SDL?

Maybe I’m just supposed to determine experimentally whether SDL_Audio is good enough for me. From some of the responses to my prior thread, I’m holding out hope.

I really do want to do this on Mac, PC, and Linux if possible. But Mac OS X is my current concern.

This wouldn’t work, since you would have effectively no time at all
to generate the buffer before there is a drop-out. The driver and/or
hardware needs something to do while you’re processing. (And do note
that heavy audio applications can spend almost as long generating a
buffer as it takes to play it back!)

The way it’s normally handled depends on the platform and driver API:

For shared memory based I/O:
The callback is activated as soon as there is room for a
full buffer in the shared DMA buffer.

For read()/write() based I/O:
The callback is activated as soon as the previous buffer
has been written to the audio driver. That is, the
"write()" call returns as soon as the driver has managed
to copy the data into the DMA buffer.

So, in short, you can expect to have either N-1 (shared buffer) or N
(write()) buffers ahead of you right when the callback starts.
Problem is, you can’t be sure what N is… heh
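In numbers, that works out to something like this (a sketch only; n_buffers stands for the unknown N above, which SDL doesn’t expose, and estimated_latency_seconds is just an illustrative helper):

```c
#include <SDL/SDL.h>

/* Rough output latency, assuming the driver keeps 'n_buffers' of
   spec->samples frames queued ahead of the callback. 'n_buffers'
   is the unknown N - measure it rather than trusting it. */
double estimated_latency_seconds(const SDL_AudioSpec *spec, int n_buffers)
{
    return (double)n_buffers * (double)spec->samples / (double)spec->freq;
}
```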

One would assume that N == 2, and one could wish that it’s 3
(better for low latency audio, as it cuts you some more slack in
relation to total latency than 2 buffers) - but I’m not sure what N
is in SDL, or if it’s even consistent.
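If you want to pin N down empirically, one way is to timestamp each callback and compare the spacing to the buffer duration. A sketch (timing_callback is illustrative; SDL_GetTicks() has only millisecond resolution, so average over many callbacks - and printf from the audio thread can itself cause drop-outs, so this is for testing only):

```c
#include <SDL/SDL.h>
#include <stdio.h>
#include <string.h>

/* If the callbacks settle into a steady rate of (samples / freq)
   seconds after an initial back-to-back burst, the length of that
   burst hints at how many buffers the driver queues ahead (the N
   above). */
static void timing_callback(void *userdata, Uint8 *stream, int len)
{
    static Uint32 last = 0;
    Uint32 now = SDL_GetTicks();

    (void)userdata;
    if (last)
        printf("callback interval: %lu ms (%d bytes requested)\n",
               (unsigned long)(now - last), len);
    last = now;

    memset(stream, 0, len);   /* silence; a real app would mix here */
}
```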

The source is there, though. :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'

.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture       |
`----------------------------> http://www.linuxdj.com/maia -'

http://olofson.net | http://www.reologica.se
