SDL useful for sound synthesis?

Hi there,

I’ve been browsing through the SDL site and the examples to try to figure
out whether SDL is appropriate for my application.

I’m basically synthesizing my own sound using an "additive synthesis"
technique, i.e. adding individual continuous sinusoids that each have their
own phase, frequency, and amplitude.

I am able to compute 10 seconds of sound and then output this to the audio
device simply via OSS. However, I need to be able to do this in an
"on-the-fly" manner, i.e. I can't precalculate 10 seconds of sound;
rather, I will be calculating a few milliseconds of sound samples at a time
and playing these as soon as they get processed.

I just have to make sure that everything is real-time and sounds continuous
and that there are no clicks or anything between samples.

I am totally new to audio programming, so if someone could help direct me as
to whether SDL or SDL_mixer (or maybe another library) would be best suited
for my needs (and why?), that would be great!

Thanks again,

LD

I think that’s the classic problem when trying to stream anything,
whether audio or video.

Have you tried to make yourself a “circular” buffer for audio?

Play your precalculated one-second audio sequence from one half, for example.
During this, calculate/prepare your second sequence in the second half of the buffer.

How to do this without clicks and ugly stuff? Use the threading capability
of SDL: create a thread (there are tutorials about this). You should play
from within that thread.
By using a locking mechanism for the buffer (a mutex), you can see from your
main program when a half-buffer becomes empty. Pass the other half-buffer to
the thread and start your calculation again in the “empty” half.

Try to use small buffers (2 or 8 seconds of play duration?) and use
"sleep" between your sequences of calculations.
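The half-buffer scheme above generalizes to a ring buffer with running write/read counters. The sketch below is an illustration only (all names invented; single producer, single consumer; size a power of two). In a real SDL program the index updates would need protection, e.g. with a mutex or SDL_LockAudio()/SDL_UnlockAudio(), and note that the follow-ups in this thread recommend generating the audio directly in SDL's callback instead of in an extra thread:

```c
/* Minimal single-producer/single-consumer ring buffer for samples. */
#define RB_SIZE 1024   /* must be a power of two */

typedef struct {
    short data[RB_SIZE];
    unsigned w;  /* total samples written so far */
    unsigned r;  /* total samples read so far    */
} Ring;

static unsigned rb_avail(const Ring *rb) { return rb->w - rb->r; }
static unsigned rb_space(const Ring *rb) { return RB_SIZE - rb_avail(rb); }

/* Producer: append up to n samples, return how many were accepted. */
static unsigned rb_write(Ring *rb, const short *src, unsigned n)
{
    unsigned todo = n < rb_space(rb) ? n : rb_space(rb);
    for (unsigned i = 0; i < todo; i++)
        rb->data[(rb->w + i) & (RB_SIZE - 1)] = src[i];
    rb->w += todo;
    return todo;
}

/* Consumer: take up to n samples, return how many were delivered. */
static unsigned rb_read(Ring *rb, short *dst, unsigned n)
{
    unsigned todo = n < rb_avail(rb) ? n : rb_avail(rb);
    for (unsigned i = 0; i < todo; i++)
        dst[i] = rb->data[(rb->r + i) & (RB_SIZE - 1)];
    rb->r += todo;
    return todo;
}
```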

I don’t know, I just hope you can catch the idea (I don’t think it is
originally mine at all :slight_smile: ),

I haven’t tried it yet, but I’ll have to do something similar myself in the future.

On 9/21/06, Louis Desjardins <lost_bits1110 at hotmail.com> wrote:

[…]


george

Hi there,

I’ve been browsing through the SDL site and the examples to try to
figure out whether SDL is appropriate for my application.

I’m basically synthesizing my own sound using an “additive
synthesis” technique, i.e. adding individual continuous sinusoids
that each have their own phase, frequency, and amplitude.

I am able to compute 10 seconds of sound and then output this to the
audio device simply via OSS. However, I need to be able to do this
in an “on-the-fly” manner, i.e. I can't precalculate 10 seconds
of sound; rather, I will be calculating a few milliseconds of
sound samples at a time and playing these as soon as they get
processed.

Works For Me™. :wink:

http://audiality.org/

http://www.olofson.net/kobodl/
	(Uses a predecessor of the above.)

http://olofson.net/examples.html
	simplemixer

http://olofson.net/mixed.html
	DT-42 DrumToy
	speaker

All of the above use the SDL audio API directly; no SDL_mixer or
anything.

simplemixer and speaker are rather small and simple and might serve as
examples.

DT-42 deals with synchronizing the display with audio output that has
significant latency, which is a must in such applications if you
can’t get “almost zero” latency output.

I just have to make sure that everything is real-time and sounds
continuous and that there are no clicks or anything between samples.

Well, there’s the problem… In my experience, this works fine on Mac
OS X out of the box, and on Linux as long as you avoid sound daemons.
A 10-20 ms buffer should work fine; below 5 ms is possible on a
properly tuned system with a preemptive or low-latency-patched Linux
kernel. That’s quite sufficient for real time synthesis, at least if
you take care to timestamp and handle incoming MIDI events and the
like properly. (Maintain a fixed latency, rather than handling all
events starting at the first sample frame of each buffer.)
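That fixed-latency scheme can be made concrete with a little arithmetic. This is a hypothetical sketch (all names invented, not code from any of the projects above): every event is delayed by the same fixed number of frames, and the result is the exact sample offset inside the current buffer at which it should be applied, instead of snapping all events to the buffer boundary:

```c
#include <stdint.h>

typedef struct {
    int64_t frame;  /* when the event arrived, in sample frames,
                       on the same clock as the audio output      */
} Event;

/* Offset (in frames) into a buffer starting at 'buf_start' at which
   the event should take effect, given a fixed latency. A negative
   result means the event is late, so apply it at offset 0; a result
   >= the buffer length belongs to a later buffer. Because every
   event gets the same delay, the relative timing between events is
   preserved. */
static int64_t apply_offset(const Event *e, int64_t buf_start,
                            int64_t latency_frames)
{
    int64_t off = e->frame + latency_frames - buf_start;
    return off < 0 ? 0 : off;
}
```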

Whether you use SDL, or the underlying API (OSS, ALSA or CoreAudio)
directly doesn’t matter, as the layer is rather thin (minimal
overhead) and doesn’t involve any context switches or such.

However, I’ve never actually seen anything like low latency on Windows
using SDL. DirectSound seems to be able to provide acceptable
latencies with serious drivers (though many soft synths seem to use
various low level hacks to achieve this), and ASIO should work too
(designed specifically for low latency audio), but AFAIK, SDL doesn’t
support ASIO, and I don’t know if “pro audio” DirectSound drivers
would actually help SDL.

Now, considering that you mention OSS, I assume you’re not on Windows,
so maybe you won’t have to worry about this.

I am totally new to audio programming so if someone could help
direct me as to whether SDL or SDL_mixer (or maybe another library)
would be best suited for my needs (and why?), that would be great!

SDL_mixer and most other libs layer over the SDL audio API, so they
don’t really offer anything you’d need if you’re already doing the
synthesis in your code.

What you need to do is adapt your code to the callback model used by
SDL. For each buffer, generate audio for that buffer, based on the
current state of input.

Try to keep the CPU load as even as possible across buffers. Doing a
large window FFT or something like that every N buffers is a bad
idea, and will cause audio drop-out long before you’re anywhere near
utilizing the full CPU power.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Thursday 21 September 2006 18:46, Louis Desjardins wrote:

[…]

I think that’s the classic problem when trying to stream anything,
whether audio or video.

Real time sound synthesis is quite different from streaming, in that
you essentially want zero latency. That is, you can’t use any
buffering beyond what the audio subsystem needs to avoid glitches, or
you lose the whole point of trying to keep the audio output latency
low in the first place.

[…]

How to do this without clicks and ugly stuff? Use the threading
capability of SDL: create a thread (there are tutorials about
this). You should play from within that thread.

I would strongly recommend not using any extra threads for low
latency audio on most platforms. It increases the risk of operating
system “coffee breaks” interfering with your audio, and you need
additional buffering between each pair of threads in the chain,
adding to the total latency.

SDL already provides an audio thread on platforms that need it, and
your best bet is to do all audio processing in that thread, that is,
in the context of the callback from SDL audio.

That said, there are low latency audio subsystems that use multiple
threads or even processes to great effect (http://jackaudio.org/),
but getting that to work reliably can be hard, if at all possible, on
some platforms. “Don’t try this at home!” :wink:

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Thursday 21 September 2006 19:12, Waltzer George wrote:

Yep, my mistake. I just confused MIDI with digital audio (wav format); I
thought he was doing some calculation and in the end sending the wav data
to the port.

For threads, yes, I forgot that the audio part already has one (using the
callback).
Well, to have such latencies controlled, perhaps he should then try an OS
like an RTOS or the like, along with a good card driver :slight_smile:

long live SDL…

On 9/21/06, David Olofson wrote:

[…]


george

Hi everyone!

Thank you so much for your responses! This will surely help me a lot; I
truly appreciate it.
Just one last thing…! :s

I guess when browsing through the SDL examples I was under the impression
that most SDL users who use its sound capability already have their wav
files (or whatever format their audio is in) and just play this back,
maybe adding some effects or something. This is why I wasn’t sure if I could
use SDL to feed in my very own synthesized sound (i.e. simply a binary array
of short ints or something) and have it play on-the-fly (i.e. soon after
each sound chunk is processed) and in real-time.

Since I’m not doing game programming, but rather I am making a simulation
program, I was considering OpenAL - which is a library just for audio. Does
anyone know if OpenAL might be more advantageous/disadvantageous for what
I’m trying to do?

Thanks again! Hopefully I can be the expert one day and help others!

LD

Hello Louis,

Thursday, September 21, 2006, 7:23:47 PM, you wrote:

I guess when browsing through the SDL examples I was under the impression
that most SDL users who use its sound capability already have their wav
files (or whatever format their audio is in) and just play this back,
maybe adding some effects or something. This is why I wasn’t sure if I could
use SDL to feed in my very own synthesized sound (i.e. simply a binary array
of short ints or something) and have it play on-the-fly (i.e. soon after
each sound chunk is processed) and in real-time.

SDL itself handles audio with a callback that asks for a buffer of
some size (usually a small one) to be filled with audio data, and
this callback runs concurrently with your main program. Extra stuff
such as playing wavs, streaming etc. is handled by SDL_mixer, which
sits on top of this.

Since I’m not doing game programming, but rather I am making a simulation
program, I was considering OpenAL - which is a library just for audio. Does
anyone know if OpenAL might be more advantageous/disadvantageous for what
I’m trying to do?

I’m not sure about this. OpenAL’s streaming allows you to calculate
buffers and queue them for playback, but if what you want is real
time, this is totally inappropriate, as it has no settings to
control how large the mix buffer is, nor any notification of where
playback is (other than when it’s done with a set of queued buffers).

If latency is of the utmost importance, you might want to consider
looking at PortAudio… especially as on platforms that support it
(such as Windows and Mac), this supports ASIO.

--
Best regards,
Peter

And for Linux, one could use JACK.

On Thursday 21 September 2006 23:11, Peter Mulholland wrote:


If latency is of the utmost importance, you might want to consider
looking at PortAudio… especially as on platforms that support it
(such as Windows and Mac), this supports ASIO.