[…]
OK, my question is whether you could help me get rid of that jittering at
higher CPU loads. I guess someone with deeper knowledge of
SDL audio internals might instantly know the answer.
What do you mean by jittering, more specifically? Are you getting
audio drop-outs, or is it a matter of varying event->audio output
latency?
How the thing works:
I have a
freq = 44100;
channels = 2; //stereo
format = AUDIO_S16;
samples = 2048;
SDL_AudioSpec (verified after initializing, and it's the same format
that Audiere gives us)
In the audio callback I do the following:
- let Audiere mix the next streamlen bytes of audio data
- write it into the SDL audio stream
Are you sure Audiere actually mixes the exact number of bytes you ask
for? If not, it may not actually render the same amount of data each
time you call it, which means you’ll start missing deadlines long
before you’re at 100% CPU load.
That’s exactly what Audiere does in the Windows backend.
Same buffer size too?
A general-purpose OS usually has hideous scheduling jitter, even at
so-called “real time priority”, which severely restricts the latency
(buffer size) and CPU power available for real time DSP. Say you can
use 50% of the CPU time without problems with a certain buffer size.
Cut the buffer size in half, and you could end up getting drop-outs
even if you use virtually no CPU time at all. It does not scale
towards zero latency. It scales towards the buffer size that
corresponds to the worst case scheduling latency.
I guess the jittering is because Audiere takes too much time to
mix the data (for SDL). However, I’m worried because with the Windows
backend everything works fine.
I can’t swear that SDL isn’t doing something funny, because I’ve been
having trouble with Audiality over the SDL audio backend all the
time, and never been able to figure out what’s going on…
However, this is on Windows only! On Linux, I get excellent
performance whether I use SDL audio or OSS. Even on old Linux kernels
that have worse real time performance than Windows, Audiality performs
much better than on Windows.
But maybe the Windows backend is written in such a way that the audio
callback can run multiple times at once.
Not sure what you mean… What would that accomplish?
You need to generate exactly the number of samples the callback needs
- and the only way to minimize the sensitivity to scheduling jitter
is to keep the CPU load as steady as possible.
Do you have any idea what could be the problem? Or what causes audio
jittering under SDL in general?
Well, from what I’ve seen, SDL audio works great on Linux, whereas on
Windows, there are serious latency problems - by musical synthesis
standards, at least. (I’d say anything above 10 ms is unacceptable
for interactive real time synthesis, but you don’t need to get
anywhere near those figures in games. In fact, if you do, you’ll need
to delay the sound effects so they aren’t played before the user can
see the visual events!)
I don’t know if the latency problems are caused by Windows, SDL or the
combination, but I suspect that it’s mostly a Windows issue. To get
anywhere near the “standard” Linux, BeOS and Mac OS latencies on
Windows, you need to use ASIO, EASI or similar “bypass” audio API
instead of DirectSound. (KernelStreams might work, but only if you
disable the internal software mixer.)
One thing you might try is to log and analyze timestamps from various
places, such as when you enter and leave the SDL audio callback. (Use
x86 asm RDTSC, Win32 performance counters or something. You might
get away with ms accuracy when looking for critical latency peaks,
but you’ll get more interesting data with µs accuracy or better.)
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
— http://olofson.net — http://www.reologica.se —
On Saturday 12 November 2005 22.13, Florian Hufsky wrote: