Accurate audio timing

Hi Folks;

I have a design question relating to a timing/dance-based game I might
return to work on. Such a game relies on sound/music playing with a high
degree of accuracy, since the ear can easily pick up millisecond errors.
The basic gameplay would have a metronome, with the user required to perform
actions in time with it.

I played around with SDL some time ago (over a year) to try and get this
working, and ran into problems generating a metronome with wav samples.

In short, if I want to make a reliably timed, cross-platform metronome,
which route should I go: threads, a callback, something else?

Sorry for the unspecific nature of my question;
any thoughts/suggestions are appreciated

best regards,
richard

[…]

In short, if I want to make a reliably timed, cross-platform
metronome, which route should I go: threads, a callback, something else?

For the low level side, just use the callback. Throwing extra threads
in the mix is just asking for trouble, and doesn’t really add
anything in most cases.

Timing can be derived from SDL_GetTicks(). It’s “only” millisecond
accurate, but unless you’re running an RTOS or a properly
configured “multimedia OS” (BeOS, OS X, Linux/lowlatency, preemptive
Linux 2.6 etc), your input event timing won’t be nearly as accurate
anyway.

Basically, what you do is timestamp events as they arrive, and then
you calculate start times (audio buffer offsets) based on those
timestamps and the callback timing.
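As a sketch of that timestamp-to-offset calculation (the function name and the clamping policy are my own, not part of SDL): given the running audio time at the start of the buffer being filled, an event timestamp in SDL_GetTicks() milliseconds maps to a sample offset like this.

```c
#include <stdint.h>

/* Convert an event's millisecond timestamp (as from SDL_GetTicks()) into a
 * sample offset relative to the start of the audio buffer currently being
 * filled. 'buffer_start_ms' is the running audio time at the start of this
 * callback, maintained by the mixer. Late events (timestamp before the
 * buffer start) are clamped to offset 0, i.e. play as soon as possible;
 * offsets >= the buffer length belong to a later callback and should be
 * queued rather than written. */
int event_to_sample_offset(uint32_t event_ms, uint32_t buffer_start_ms,
                           int sample_rate)
{
    int32_t delta_ms = (int32_t)(event_ms - buffer_start_ms);
    if (delta_ms < 0)
        return 0;                       /* event is late: start immediately */
    return (int)((int64_t)delta_ms * sample_rate / 1000);
}
```

At 44100 Hz, an event arriving 10 ms into the current buffer lands at sample 441.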

Theoretically, you can just timestamp the callbacks with
SDL_GetTicks(), but as there will likely be substantial scheduling
jitter in most cases, you should probably calculate callback
timestamps based on the sample rate instead, and just use
SDL_GetTicks() to nudge the “running audio time” to stay in sync.
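One way that nudging can look in code (the struct, the function, and the 1% correction factor are all illustrative choices of mine, not an SDL facility): advance the clock by the frames actually mixed, then pull it a small fraction toward the wall clock each callback.

```c
#include <stdint.h>

/* Running audio time in milliseconds, derived from the number of frames
 * mixed so far rather than from when callbacks happen to run. Each
 * callback, a small fraction of the difference against the wall clock
 * (SDL_GetTicks()) is applied, so scheduling jitter is smoothed out
 * while long-term drift is still corrected. */
typedef struct {
    double audio_ms;     /* sample-derived running time */
    int    sample_rate;
} AudioClock;

void audio_clock_advance(AudioClock *c, int frames, uint32_t ticks_ms)
{
    c->audio_ms += 1000.0 * frames / c->sample_rate;
    double error = (double)ticks_ms - c->audio_ms;
    c->audio_ms += error * 0.01;   /* nudge 1% toward the wall clock */
}
```

With a correction this gentle, a 10 ms scheduling spike moves the audio clock by only 0.1 ms, while a persistent drift is still eliminated over a few seconds.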

The real problem, however, is not input timing or input/audio sync,
but rather that SDL has no way of reporting the actual
callback->output latency. The best you can do is a qualified guess
based on the SDL audio buffer size you get (which may differ from
what you ask for!), and it might be a good idea to provide some way
for advanced users to tweak it manually.
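The qualified guess can be as simple as the following (the function and the assumption that the driver keeps about two buffers queued are mine; the real figure varies per platform, which is exactly why a user-tweakable correction is worth providing):

```c
/* Rough output latency estimate in milliseconds, assuming the driver keeps
 * roughly 'queued_buffers' buffers of the size SDL actually granted queued
 * ahead of the hardware. SDL 1.2 cannot report the true latency, so treat
 * this as a starting point and let advanced users adjust it. */
double estimated_latency_ms(int buffer_frames, int sample_rate,
                            int queued_buffers)
{
    return 1000.0 * buffer_frames * queued_buffers / sample_rate;
}
```

For example, 1024-frame buffers at 44100 Hz with two buffers in flight gives roughly 46 ms.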

All that said, note that you don't really need this for games
generally; a constant latency below some 50 ms is sufficient, and you
can't do anything about the latency anyway. (You can't count on anything
below 100 ms or so working on every system, so you need to make it
user-configurable to go any lower.) The problem with musical
applications, as you've probably realized, is that in addition to
constant latency, they need to synchronize and/or correlate input
events, internal events and video output with audio output.

You might want to look at DT-42, which is a minimalistic tracker style
drum machine/sequencer that deals with most of these issues - or at
least, tries to, within the limits of the SDL API:
http://olofson.net/mixed.html

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Friday 04 January 2008, Richard Henwood wrote:

David Olofson wrote:

[…]

In short, if I want to make a reliably timed, cross-platform
metronome, which route should I go: threads, a callback, something else?

Thanks for the detailed response. It has clarified a number of grey areas
for me.

My other idea for an approach was to delegate the audio completely to a
library like libmikmod and have it play 'metronome.mod'. I would then ask
the library to tell me when a beat was playing, rather than calling the beat
from SDL.

However, since libmikmod does not apparently have a callback or other nice
mechanism to do this, my search continues…

You might want to look at DT-42, which is a minimalistic tracker style
drum machine/sequencer that deals with most of these issues - or at
least, tries to, within the limits of the SDL API:
http://olofson.net/mixed.html

… and DT-42 looks like a good candidate for study. I’ll check that out.

Thanks again for your help
r,

On Friday 04 January 2008, Richard Henwood wrote:

Hi Folks;

I have a design question relating to a timing/dance-based game I might
return to work on. Such a game relies on sound/music playing with a high
degree of accuracy, since the ear can easily pick up millisecond errors.
The basic gameplay would have a metronome, with the user required to perform
actions in time with it.

I played around with SDL some time ago (over a year) to try and get this
working, and ran into problems generating a metronome with wav samples.

In short, if I want to make a reliably timed, cross-platform metronome,
which route should I go: threads, a callback, something else?

Sorry for the unspecific nature of my question;
any thoughts/suggestions are appreciated

Create a long looped track of a metronome: not just one tick, but
several minutes' worth, and loop it. Alternatively, it can be generated
through the SDL sound callback so that its rate can be varied. That will
get the sound out at the rate you want with very little, if any, random
delay between ticks. The trick is to keep the sound running constantly,
not starting and stopping it at the times when you want a tick. The rest
of the program can then deal with any delays in timing caused by the OS.
That means you will have to be fairly tolerant of the timing of actual
input events.
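The generate-in-the-callback variant can be sketched like this (a hypothetical fill function of my own, showing only the phase bookkeeping, not the SDL_OpenAudio setup): the stream never stops, and carrying the phase across calls is what keeps tick spacing exact regardless of buffer size.

```c
#include <stdint.h>
#include <string.h>

/* Fill a mono 16-bit buffer with silence plus a short click every 'period'
 * frames. 'phase' carries the frame position within the current period
 * across calls, so ticks stay evenly spaced no matter how the stream is
 * chopped into buffers. Changing 'period' between calls varies the tempo. */
void metronome_fill(int16_t *out, int frames, int period, int click_len,
                    int *phase)
{
    memset(out, 0, (size_t)frames * sizeof *out);
    for (int i = 0; i < frames; i++) {
        if (*phase < click_len)
            out[i] = (*phase & 1) ? -16000 : 16000;  /* crude square click */
        *phase = (*phase + 1) % period;
    }
}
```

In a real SDL program this body would live inside the audio callback you pass in SDL_AudioSpec, with 'phase' in the userdata.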

The alternative would be to use a true real time OS or code directly to
the metal, but that defeats the idea of having a cross platform game.

Bob Pendleton

On Fri, 2008-01-04 at 08:41 +0000, Richard Henwood wrote:

best regards,
richard


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

