Hi there,
I’ve been browsing through the SDL site and the examples to try to
figure out whether SDL is appropriate for my application.
I’m basically synthesizing my own sound using an “additive
synthesis” technique, i.e. adding individual continuous sinusoids
that each have their own phase, frequency, and amplitude.
I am able to compute 10 seconds of sound and then output this to the
audio device simply via OSS. However, I need to be able to do this
in an “on-the-fly” manner, i.e. I can’t precalculate 10 seconds
of sound, but rather, I will be calculating a few milliseconds of
sound samples at a time and playing these as soon as they get
processed.
Works For Me™. 
http://audiality.org/
http://www.olofson.net/kobodl/
(Uses a predecessor of the above.)
http://olofson.net/examples.html
simplemixer
http://olofson.net/mixed.html
DT-42 DrumToy
speaker
All of the above use the SDL audio API directly; no SDL_mixer or
anything.
simplemixer and speaker are rather small and simple and might serve as
examples.
DT-42 deals with synchronizing the display with high-latency audio
output, which is a must in such applications if you can’t get
“almost zero” latency output.
I just have to make sure that everything is real-time and sounds
continuous and that there are no clicks or anything between samples.
Well, there’s the problem… In my experience, this works fine on Mac
OS X out of the box, and on Linux, as long as you avoid sound daemons.
Buffer sizes of 10-20 ms should work fine; below 5 ms is achievable
on a properly tuned system with a preemptive or low-latency patched
Linux kernel. That’s quite sufficient for real time synthesis, at
least if you take care to timestamp and handle incoming MIDI events
and the like properly.
(Maintain a fixed latency, rather than handling all events starting
at the first sample frame of each buffer.)
Whether you use SDL, or the underlying API (OSS, ALSA or CoreAudio)
directly doesn’t matter, as the layer is rather thin (minimal
overhead) and doesn’t involve any context switches or such.
However, I’ve never actually seen anything like low latency on Windows
using SDL. DirectSound seems to be able to provide acceptable
latencies with serious drivers (though many soft synths seem to use
various low level hacks to achieve this), and ASIO should work too
(designed specifically for low latency audio), but AFAIK, SDL doesn’t
support ASIO, and I don’t know if “pro audio” DirectSound drivers
would actually help SDL.
Now, considering that you mention OSS, I assume you’re not on Windows,
so maybe you won’t have to worry about this.
I am totally new to audio programming so if someone could help
direct me as to whether SDL or SDL_mixer (or maybe another library)
would be best suited for my needs (and why?) that would be great!
SDL_mixer and most other libs layer over the SDL audio API, so they
don’t really offer anything you’d need if you’re already doing the
synthesis in your code.
What you need to do is adapt your code to the callback model used by
SDL. For each buffer, generate audio for that buffer, based on the
current state of input.
Try to keep the CPU load as even as possible across buffers. Doing a
large window FFT or something like that every N buffers is a bad
idea, and will cause audio drop-out long before you’re anywhere near
utilizing the full CPU power.
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Thursday 21 September 2006 18:46, Louis Desjardins wrote: