[…]
now on, after everything you’ve said. I was thinking of maybe trying
procedural sounds in the future, but maybe SDL_Mixer is still the way
to go with that too.
Could probably be done, but I’m not sure it’s the optimal solution. (You
need to stay in sync with playback, unless you’re just going to prepare
one-shot sounds right before playing them.)
I don’t even have any idea what a procedural
sound would be like, so maybe it’s just a crazy idea that isn’t even
practical.
Real time synth? Actually, my engine doesn’t do much of that currently,
and the focus will probably stay on controlling the playback of
prerendered waveforms. Completely real-time generated sounds tend to be
very CPU intensive if you want interesting results of reasonable
quality.
[…]
I’m using SDL_Mixer now, btw, and it’s working pretty well. The only
confusion I ran into is what ‘channels’ are. I think there are two
different definitions of the term. You open the audio device with either
1 or 2 channels (mono or stereo, I assume), but then when you play a
chunk, you have to specify a channel, which seems to be something
completely different.
Here’s another, to add to the confusion: My channels are just like MIDI
channels - they represent “contexts” from which you can fire off and
control one or more voices. (This is how normal synths handle chords.)
So I’m assuming it’s more like just a slot
(oscillator pair, whatever).
Yes, I think so. (Don’t know much about SDL_Mixer, actually…)
I looked at the alien example program,
and it opens 2 channels, but uses 3 channels to play waves. Anyway,
whenever I need to play a sound, I just play it with -1 as the channel
so it allocates one itself, and I haven’t had problems yet.
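(For reference, a minimal sketch of the distinction, assuming the SDL_mixer 1.2 API; "boom.wav" is a hypothetical file:)

```c
#include "SDL.h"
#include "SDL_mixer.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_AUDIO) < 0)
        return 1;

    /* The '2' here means OUTPUT channels: stereo. */
    if (Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0)
        return 1;

    /* These "channels" are mixing slots: how many chunks can play
     * at once. Completely unrelated to mono/stereo. */
    Mix_AllocateChannels(16);

    Mix_Chunk *boom = Mix_LoadWAV("boom.wav");   /* hypothetical file */
    if (boom)
    {
        /* -1: let SDL_mixer pick any free mixing slot. */
        Mix_PlayChannel(-1, boom, 0);
        SDL_Delay(1000);
        Mix_FreeChunk(boom);
    }

    Mix_CloseAudio();
    SDL_Quit();
    return 0;
}
```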
In Kobo Deluxe (which uses my engine) I just play sound effects on one
channel, music on another, etc. While played and controlled through one
channel, MIDI songs allocate up to 16 “private” channels that are used
for the actual playing; one for each MIDI channel used. (I’ll support
"ports" as well, eventually, for really phatt arrangements.)
Each channel can be configured WRT signal routing, insert effects (reverb
and the like), etc., so this makes it very easy to separate the “master
controls” for music, SFX, etc.
Oh, and there are “groups” as well, allowing some aspects of a group of
voices to be controlled from a single place. Confused yet?
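(SDL_mixer happens to have a much simpler "groups" concept of its own, which covers some of the same ground; a sketch, again assuming the 1.2 API:)

```c
#include "SDL_mixer.h"

/* Tag mixing channels 0..3 as group 1 (say, sound effects),
 * leaving the rest ungrouped for music-style chunks. */
static int pick_sfx_channel(void)
{
    int i;
    for (i = 0; i < 4; ++i)
        Mix_GroupChannel(i, 1);

    /* Find a free channel within the group... */
    int ch = Mix_GroupAvailable(1);
    if (ch == -1)
        ch = Mix_GroupOldest(1);  /* ...or steal the longest-playing one. */
    return ch;
}

/* Elsewhere: silence all sound effects at once, without touching
 * anything outside the group. */
static void kill_all_sfx(void)
{
    Mix_HaltGroup(1);
}
```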
It could
be playing a ton of sounds at once in theory, and I’m not sure if
that’s a bad thing or not.
Well, Kobo Deluxe currently has a limit of 32 voices - but the main
problem with the in-game sound effects is that they were originally
programmed to have one monophonic voice each. That is, if you played
"EXPLO1" repeatedly, it would actually just restart, rather than grabbing
a new voice, as it does now.
In short, if you use dynamic voice/channel allocation, you have to
consider the effect of sound effects hanging around. The good ol’ random
burst of explosions will quickly drive the output into clipping! (Or, as
in my engine, force the output limiter to compress the signal, which
causes “pumping” if it’s done too heavily.)
Anyone have advice on practical limits one
should set?
Either keep your sound effects short, or don’t fire off lots of them all
the time. Or, one way or another, make sure you have some kind of control
over how many sound effects are playing at once.
(Refer to the latest Kobo Deluxe snapshots to hear how it should not be
done! I need to tweak all the sound FX control code to make it sound
right again.)
//David Olofson — Programmer, Reologica Instruments AB
.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Tuesday 12 March 2002 01:56, Jason Hoffoss wrote: