Hi-
given a sound effect, how would one use SDL/SDL_mixer to play back samples at different points across the left-to-right pan spectrum?
At the moment all I can see is Mix_SetPanning in SDL_mixer, which would seem to require setting specific channels to specific stereo placement points, then playing the sounds you want placed through those specific channels, which seems like an ass-backwards way to do the thing.
Alternatively, has anyone come up with their own code for doing something along the lines of:
playsound(sound, stereo placement, volume)
where stereo placement is from 0 (left) to 180 (right)
?
Hi-
given a sound effect, how would one use SDL/SDL_mixer to play back samples
at different points across the left-to-right pan spectrum?
At the moment all I can see is Mix_SetPanning in SDL_mixer, which would seem to
require setting specific channels to specific stereo placement points, then
playing the sounds you want placed through those specific channels, which
seems like an ass-backwards way to do the thing.
You’ll need some kind of abstraction to manage playing sounds - call
them channels, sound sources or whatever… There are advantages to
separating channels from the actual sounds in non-trivial cases.
If the API is too detailed/complicated, you just wrap it into a custom
API more suitable for your application.
Alternatively, has anyone come up with their own code for doing something
along the lines of:
playsound(sound, stereo placement, volume)
where stereo placement is from 0 (left) to 180 (right)
?
Well, I’ve written a few sound engines; Audiality (sampleplayer +
offline modular synth; used in Kobo Deluxe), an unreleased OpenAL
style engine (nice and… too simple to be of much use to me), and
Audiality 2 (a full realtime modular synth with subsample accurate
scripting).
The old Audiality would do the job, but it’s a mess, and it’s
officially abandoned. (Going to replace the old Kobo Deluxe branch
with A2 probably. Maybe rip the offline synth out for the sound
effects, as it’s very different from A2.) Not recommended.
Audiality 2, combined with SDL_sound or similar (as A2 only deals with
raw audio data at this point) would certainly do the job and then some
- but you’d have to learn a small scripting language to do anything
much with it! It’s more like Csound, or a non-GUI (so far) alternative
to FMOD and Wwise in the making, than it is an alternative to
SDL_mixer.
One could of course implement a simplified API over Audiality 2, with
a built-in script library and stuff… I’ll think about that. (I kind
of have that in Kobo II, but that’s written in EEL.)
Anyway, link: http://audiality.org
–
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’
At the moment all I can see is Mix_SetPanning in SDL_mixer, which would seem
to require setting specific channels to specific stereo placement
points, then playing the sounds you want placed through those specific
channels, which seems like an ass-backwards way to do the thing.
Yeah?
You play a sound on a given channel, like you always do in SDL_mixer,
and you can optionally change the panning of that channel on-the-fly.
What would you like this to do instead?
–ryan.
David Olofson wrote:
You’ll need some kind of abstraction to manage playing sounds - call
them channels, sound sources or whatever… There are advantages to
separating channels from the actual sounds in non-trivial cases.
Hi David,
can you describe the advantages?
Thanks for getting back to me-
Matt
David Olofson wrote:
You’ll need some kind of abstraction to manage playing sounds - call
them channels, sound sources or whatever… There are advantages to
separating channels from the actual sounds in non-trivial cases.
Hi David,
can you describe the advantages?
If you go beyond short, one-shot sounds, you’re going to need some way of
controlling sounds while they’re playing. In an API that doesn’t have
explicit positional audio support (a listener object that you can move
around etc) you’re even going to have to update the pan positions of each
sound source, unless sounds are very short and/or things move very slowly.
Long or looped sounds are at the very least going to have to be stopped at
some point.
To do things like that, you’ll usually need something that represents each
instance of a playing sound event, as opposed to the physical sounds;
loaded WAVs etc - or you can only play one instance of each sound effect in
any useful fashion.
In SDL_mixer (as I understand it), there is an array of channels, and when
starting a sound, you can either pick a channel explicitly, or say “-1” to
automatically pick the first unused one, in which case you’ll have to use
the returned channel index for further operations.
In Audiality 2, there are no channels in the traditional sense, but rather
a tree of voices and (recursively) subvoices. To play a sound, you start a
"program" on a subvoice of an existing voice. (Maybe not the best
terminology… These voices can also serve as channels, groups, buses or
whatever you want them to be.) You get a voice handle back, that you can
then use for sending messages (user defined via scripting) to the voice.
In a MIDI synth, you use the note pitch as voice ID within a channel.
Aftertouch (PolyPressure) and NoteOff messages rely on that to find the
playing notes they’re supposed to affect. There isn’t typically a physical
voice assigned to each note pitch in a synth, but that’s not something you
have to worry about when sending MIDI events to it.
Three different approaches to controlling sounds while they’re playing, but
the fundamental logic is pretty much the same.
–
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’
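A tiny illustration of why that handle indirection matters, sketched in C: a generation-counted voice handle, so a stale handle from a sound that already ended can be detected instead of silently controlling whatever sound reused that slot. All names and the pool scheme are invented for this example; SDL_mixer itself just hands back the raw channel index:

```c
#include <stdint.h>

/* A fixed pool of voice slots, each with a generation counter. A handle
 * is (slot, generation); when a voice ends, the slot's generation is
 * bumped, which invalidates every old handle pointing at it. */
#define MAX_VOICES 32

typedef struct { uint16_t slot; uint16_t gen; } VoiceHandle;

static uint16_t voice_gen[MAX_VOICES];
static int      voice_busy[MAX_VOICES];

/* Grab a free slot; a real engine would also call Mix_PlayChannel()
 * (or equivalent) here and stash the physical channel in the slot. */
VoiceHandle voice_start(void)
{
    for (int i = 0; i < MAX_VOICES; ++i) {
        if (!voice_busy[i]) {
            voice_busy[i] = 1;
            return (VoiceHandle){ (uint16_t)i, voice_gen[i] };
        }
    }
    return (VoiceHandle){ 0xFFFF, 0 };  /* no free voice */
}

/* Is this handle still the sound instance we started? */
int voice_valid(VoiceHandle h)
{
    return h.slot < MAX_VOICES && voice_busy[h.slot] &&
           voice_gen[h.slot] == h.gen;
}

/* Sound ended or was halted; free the slot and invalidate old handles. */
void voice_stop(VoiceHandle h)
{
    if (voice_valid(h)) {
        voice_busy[h.slot] = 0;
        voice_gen[h.slot]++;
    }
}
```

Pan, volume and stop operations then all take a VoiceHandle and do nothing if it has gone stale, which is exactly the "instance of a playing sound event" separation described above.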
This seems like the appropriate time to mention ALmixer:
http://playcontrol.net/opensource/ALmixer/
It basically is an SDL_mixer like API, but built on top of OpenAL.
Every channel has a 1-to-1 mapping with an OpenAL source which you can
access. While sound is playing, you can call normal OpenAL functions
using those source id’s to access all the positional effects in
OpenAL. And of course you can control the alListener too.
There is a lot of hidden re-construction going on right now to improve
ALmixer for Android (and a little iOS). If you need those platforms
right now, drop me a private note. Otherwise, I think the main repos
have what you need.
Thanks,
Eric
–
Beginning iPhone Games Development
http://playcontrol.net/iphonegamebook/
Is ALmixer able to pan yet? When I looked at it a while back, it seemed a bit
ridiculous that it was missing such basic functionality…
Mason
From: Eric Wing
To: SDL Development List
Sent: Friday, October 18, 2013 12:00 PM
Subject: Re: [SDL] Using SDL/SDL_mixer for simple stereo-placement of sounds
David Olofson wrote:
If you go beyond short, one-shot sounds, you’re going to need some way of controlling sounds while they’re playing. In an API that doesn’t have explicit positional audio support (a listener object that you can move around etc) you’re even going to have to update the pan positions of each sound source, unless sounds are very short and/or things move very slowly. Long or looped sounds are at the very least going to have to be stopped at some point.
This makes sense for longer sounds when moving, but for short one-offs I would’ve expected an api to be able to do a pan-conversion of the sample on the fly to a specific stereo pan, then feed that to a channel -
it’s non-problematic if you’ve got a large number of free channels, but I’m unfamiliar with how many channels modern equipment/drivers supply -
I’m used to the old days of DOS when you had maybe 16 channels max.
Still, your explanation makes sense and clarifies the approach, thank you-
Eric Wing wrote:
This seems like the appropriate time to mention ALmixer:
http://playcontrol.net/opensource/ALmixer/
Thanks Eric, I’ll stick with SDL for greater cross-platform portability at the moment, but I’ll keep that in mind for future purposes!
It always could, but you’re right that I never wrote an explicit API
for that. I always assumed that was best done using the real OpenAL
position APIs. I didn’t think that decision was that ridiculous.
Thanks,
Eric
–
Beginning iPhone Games Development
http://playcontrol.net/iphonegamebook/
Thanks Eric, I’ll stick with SDL for greater cross-platform portability at
the moment, but I’ll keep that in mind for future purposes!
Thanks. Though ALmixer works everywhere OpenAL works, which is a lot of
platforms. (I think I remember a patch (by Ryan?) that offered OpenAL
an SDL backend, which means OpenAL could go everywhere SDL goes. The
patch was not officially incorporated, in case SDL ever tried to be
implemented on top of OpenAL, which would lead to a vicious circle.)
Thanks,
Eric
–
Beginning iPhone Games Development
http://playcontrol.net/iphonegamebook/
[…]
This makes sense for longer sounds when moving, but for short one-offs I
would’ve expected an api to be able to do a pan-conversion of the sample on
the fly to a specific stereo pan, then feed that to a channel -
it’s non-problematic if you’ve got a large number of free channels, but
I’m unfamiliar with how many channels modern equipment/drivers supply -
I’m used to the old days of DOS when you had maybe 16 channels max.
Still, your explanation makes sense and clarifies the approach, thank you-
Hardware channel count isn’t generally an issue these days. Few sound cards
actually have hardware mixing anyway, so many (most these days?) engines
are software-only. Faster CPUs with multiple cores pretty much eliminate
the motivation to use hardware mixing, while the restrictions imposed on
signal routing and effects are a strong motivation for staying away from it.
Anyway, if you still run out of channels (CPU load could still be an issue,
especially on mobile devices), there is always the "virtual voice"
approach, where the engine actually only mixes voices that are currently
audible, or in case of overload, the most important voices.On Sat, Oct 19, 2013 at 12:19 AM, mattbentley wrote:
–
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’
*twitch* Nightmares of GGI backending on to other things ultimately
rendering to itself?
The notion that SDL might backend to OpenAL doesn’t make sense to me,
though. SDL is kind of limited to stereo audio, and OpenAL was at
least intended to be 5.1/7.1/42.1/whatever ready.
I remember that Creative had pretty much taken their ball and gone
home with OpenAL. I’m glad that it has kind of survived that, but
when I last looked at it the “unCreative” version was pretty much
limited to stereo.
Of course some of what I’ve read about (badly) emulating ALSA on top
of PulseAudio on top of this on top of that and ultimately ending up
back on ALSA as a stuttery, laggy mess isn’t inspiring about the
state of audio on open source platforms at all. Too many layers
all trying (and from our perspective as well as many others, failing)
to emulate one another.
Joseph
*twitch* Nightmares of GGI backending on to other things ultimately
rendering to itself?
The notion that SDL might backend to OpenAL doesn’t make sense to me,
though.
The ill-fated WebOS adopted SDL as its official public multimedia API.
Thus its native audio API was SDL. They did not provide OpenAL.
–
Beginning iPhone Games Development
http://playcontrol.net/iphonegamebook/
Like I said, SDL -> OpenAL doesn’t make sense to me. OpenAL -> SDL
might.
Regarding WebOS otherwise? FACEPALM
Joseph
The notion that SDL might backend to OpenAL doesn’t make sense to me,
though. SDL is kind of limited to stereo audio
Not true in modern times.
That being said, the SDL -> OpenAL patch was because every Linux backend
had subtle quirks, and every toolkit that talked to them had better and
worse luck with each backend. At the time, SDL seemed to do better
than OpenAL.
In modern times, OpenAL-Soft does fine, so I wouldn’t bother messing
with the SDL patch for it.
when I last looked at it, the “unCreative” version was pretty much limited to
stereo.
Also not true in modern times.
–ryan.
Like I said, SDL -> OpenAL doesn’t make sense to me. OpenAL -> SDL might.
Oh, sorry if that wasn’t clear: the patch made OpenAL output through
SDL. So OpenAL does all the mixing and interfacing to the app, and uses
SDL to figure out how to talk to hardware.
–ryan.
David Olofson wrote:
Anyway, if you still run out of channels (CPU load could still be an issue, especially on mobile devices), there is always the “virtual voice” approach, where the engine actually only mixes voices that are currently audible, or in case of overload, the most important voices.
Thanks David, could you describe this approach more? I mean, how would this approach apply in the framework of SDL_mixer?
I would assume that channels that aren’t being played back on are inherently not mixed anyway?
[…]
Thanks David, could you describe this approach more? I mean, how would
this approach apply in the framework of SDL_mixer?
I would assume that channels that aren’t being played back on are
inherently not mixed anyway?
What you do is separate the API from the physical voices, so that the
application can deal with virtual voices without worrying about when and
how actual audio mixing is performed. When CPU or physical voice count
limits are hit, the engine decides which virtual voices are most important
and wires only those to physical voices.
What you could do on the application side is to simply not start sounds
that are too quiet. More sophisticated approaches (tracking playback
position of silent voices, reevaluating, bringing silent voices back in
etc) really belong on the engine side. I’m not sure if any non-commercial
engine does this. It’s pretty hairy stuff to get right - and unnecessary,
unless you’re going to throw every sound source of an AAA 3D level in there
and have the engine sort it out automatically. A modern PC will happily mix
hundreds of voices on a single CPU core, so you may want to investigate if
there actually is any problem whatsoever before diving into tricky
optimizations.
–
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’
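As a rough illustration of the virtual-voice idea (the names and the selection strategy are assumptions of this example, not any particular engine's): every mix cycle, only the most important playing voices get wired to the limited set of physical channels, while the rest are merely tracked:

```c
#include <stddef.h>

#define MAX_PHYSICAL 4   /* pretend the mixer only has 4 real channels */

typedef struct {
    float importance;    /* e.g. volume after distance attenuation */
    int   playing;
} VirtualVoice;

/* Mark the MAX_PHYSICAL most important playing voices as audible;
 * everything else keeps existing as a virtual voice but is not mixed.
 * Returns how many voices were wired to physical channels. The O(n*N)
 * selection is fine for a small physical channel count. */
int wire_voices(const VirtualVoice *v, size_t n, int *audible)
{
    for (size_t i = 0; i < n; ++i)
        audible[i] = 0;
    int wired = 0;
    for (int k = 0; k < MAX_PHYSICAL; ++k) {
        int best = -1;
        for (size_t i = 0; i < n; ++i) {
            if (v[i].playing && !audible[i] &&
                (best < 0 || v[i].importance > v[best].importance))
                best = (int)i;
        }
        if (best < 0)
            break;          /* fewer playing voices than channels */
        audible[best] = 1;
        ++wired;
    }
    return wired;
}
```

A fuller engine would also keep advancing the playback position of the silent voices, so a voice that becomes important again resumes in the right place rather than restarting.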
David Olofson wrote:
What you do is separate the API from the physical voices, so that the application can deal with virtual voices without worrying about when and how actual audio mixing is performed. When CPU or physical voice count limits are hit, the engine decides which virtual voices are most important and wires only those to physical voices.
What you could do on the application side is to simply not start sounds that are too quiet. More sophisticated approaches (tracking playback position of silent voices, reevaluating, bringing silent voices back in etc) really belong on the engine side. I’m not sure if any non-commercial engine does this. It’s pretty hairy stuff to get right - and unnecessary, unless you’re going to throw every sound source of an AAA 3D level in there and have the engine sort it out automatically. A modern PC will happily mix hundreds of voices on a single CPU core, so you may want to investigate if there actually is any problem whatsoever before diving into tricky optimizations.
Gotcha - I think in my case it would be unnecessary, but it’s interesting to me to understand how these things work - I can see it being extremely relevant in something like L4D2, with hundreds of voices.
It’s a little like psychoacoustic principles in lossy audio formats - removing the data least likely to be heard - curious.
Older games frequently cull sounds that are too quiet to be heard.
Though I recall that many did so at sound trigger time, rather than
at mix time. The effect shows in older Id 3D titles (which use a
virtually unchanged sound engine from at least Doom through Quake 2
and possibly Quake 3 without OpenAL) in that sounds triggered too far
away to hear are never played, even if you wind up standing on top of
the sound generating source (such as a lift).
Back then I barely understood sound mixing, and I actually didn’t
understand the code I was looking at, so I never had the guts to try
and fix that. Back then, I never even went and collapsed the
pointless two-stage mixing - into 16 virtual stereo channels for DOS,
then those down into two for SDL. I just took the existing OSS code,
which already did that, and ported it to SDL.
If there was some intarweb tutorial back then on sound generation and
mixing that could’ve been applied to SDL, I didn’t have it. Maybe
one of these days I’ll have the time and patience to write one, but
I’m not even going to put it on my todo list at this point.
Joseph
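A sketch of the trigger-time culling described above, in C with made-up numbers and names - the point being that should_trigger() is evaluated exactly once, at trigger time, which is why a source you later walk up to stays silent (mix-time culling would instead re-evaluate the gain every frame):

```c
/* Trigger-time culling: estimate the attenuated gain once, when the
 * sound is triggered, and skip the sound entirely if it falls below
 * the audibility threshold. Cheap, but the decision is never revisited. */
#define CULL_THRESHOLD 0.01f   /* below this gain, don't bother playing */

/* Simple inverse-distance rolloff, clamped at distance 1. */
float attenuated_gain(float base_gain, float distance, float rolloff)
{
    if (distance < 1.0f)
        distance = 1.0f;
    return base_gain / (1.0f + rolloff * (distance - 1.0f));
}

/* Called once when the game event fires; if it returns 0, the sound is
 * dropped for good - even if the listener moves closer later. */
int should_trigger(float base_gain, float distance, float rolloff)
{
    return attenuated_gain(base_gain, distance, rolloff) >= CULL_THRESHOLD;
}
```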