New to SDL_Mixer (revving engine sound)

Hello,

I’m beginning to use SDL_mixer to add sounds to my program. It is a vehicle
simulator, and I need to emit the engine sound when it is running and alter
it depending on the speed of the engine (by measuring its rpm).

I have found in the archives of this list several posts about this, but I
really couldn’t find any in which an implementation was written. For now,
linear interpolation of my samples is fine for me; I’m more concerned about
how to send that data to the sound card than about sound quality.

One of the problems that first comes to my mind when using the Special Effects
API from SDL_mixer is the fact that the interpolated data length will
usually be different from the original sample data length. That makes things
difficult when implementing an effect function. In fact, there is a comment
about this in SDL_mixer.h from Ryan:

/*
 * !!! FIXME: Haven’t implemented, since the effect goes past the
 *            end of the sound buffer. Will have to think about this.
 *
 *            --ryan.
 */

How is this effect commonly implemented? Is it possible to do it through
one of SDL_mixer’s effect functions?

Thank you,

Alberto

[…]

One of the problems that first comes to my mind when using the
Special Effects API from SDL_mixer is the fact that the
interpolated data length will usually be different from the original
sample data length.

…and what’s worse; if you’re trying to play faster than the original
sample rate, you’ll run out of data. And of course, if you keep
playing slower for a long time, your effect will run out of delay
buffer space. (Or, if you violate the rules a bit and allocate memory
dynamically, you’ll eventually run out of memory.)

That makes things difficult when implementing an
effect function. In fact, there is a comment about this in
SDL_mixer.h from Ryan:

/*
 * !!! FIXME: Haven’t implemented, since the effect goes past the
 *            end of the sound buffer. Will have to think about this.
 *
 *            --ryan.
 */

That’s a different problem, actually, although slightly related. The
issue with reverb and delay effects is that they have “tails”.
(Actually, so do most effects, like filters, or any properly
bandlimited amplitude modulators, though these generally have very
short tails, compared to a reverb effect.) That is, they need to
generate output after the end of the input stream - whereas
apparently, SDL_mixer just stops playing and kills the channel when
the end of the chunk is reached.

For reverbs and the like to work, the API needs to be extended so that
effects can tell the mixer when they’re done, instead of just the
other way around.

Anyway, this extension would not solve your problem, as you’d still
need a time machine and infinite amounts of buffer memory to do it
with this kind of interface. :wink:

How is this effect commonly implemented?

Usually, the resampling is done by the first stage in the chain; the
source. (Also commonly called generator or oscillator.) This first
stage doesn’t have an input from the engine point of view, but rather
manages any input data (like samples, compressed spectral streams or
whatever) internally, possibly by means of some shared resource
manager (like a sample bank, or a pre-caching read-ahead
direct-from-disk streaming engine).

Is it possible to do it through one of SDL_mixer’s effect functions?

Well, I’m not very familiar with SDL_mixer, but the API header
strongly suggests that only effects are supported - not sources. That
is, you’ll be called at regular intervals, and you’re supposed to do
some stuff with a buffer; N samples in, N samples out.

Maybe you can cheat the system by creating an “effect” that is
actually a source; i.e. it ignores the input and just generates
output…? (Just feed it a dummy looped chunk, to keep the channel
alive.) This “effect” would need to manage its own samples, and
would have to implement resampling and looping.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Wednesday 17 January 2007 10:02, Alberto Luaces wrote:

Hello David,

thank you for your quick reply. (See below)

On Wednesday, 17 January 2007 10:45, David Olofson wrote:

One of the problems that first comes to my mind when using the
Special Effects API from SDL_mixer is the fact that the
interpolated data length will usually be different from the original
sample data length.

…and what’s worse; if you’re trying to play faster than the original
sample rate, you’ll run out of data. And of course, if you keep
playing slower for a long time, your effect will run out of delay
buffer space. (Or, if you violate the rules a bit and allocate memory
dynamically, you’ll eventually run out of memory.)

Ok. So the mixer callbacks won’t be of much help here.

How is this effect commonly implemented?

Usually, the resampling is done by the first stage in the chain; the
source. (Also commonly called generator or oscillator.) This first
stage doesn’t have an input from the engine point of view, but rather
manages any input data (like samples, compressed spectral streams or
whatever) internally, possibly by means of some shared resource
manager (like a sample bank, or a pre-caching read-ahead
direct-from-disk streaming engine).

Good idea. So I can generate the resampled sound, create a Mix_Chunk with it
and then play it. However, as the sample could be several seconds long, I
must ensure that the new resampled data starts at the position matching the
old one, so that I don’t hear the sound “restarting”, or “clicks” between
samples, when I change from one to another. The bad news is that I can’t find
a function in SDL_mixer that can position the sound at a given offset from
the beginning of the sample – that’s only available for music.

An alternative would be to generate the resampled remainder of the sound and,
when it has finished playing, loop the whole resampled sample until the
revolutions change again. Is this a reasonable approach, or is it too
complicated?

Alberto

[…]

Ok. So the mixer callbacks won’t be of much help here.

Well, not the way they’re supposed to be used, but you could probably
get away with implementing a resampling wavetable “oscillator”, the
way I suggested; by implementing an effect that ignores the input and
just generates output.

How is this effect commonly implemented?

Usually, the resampling is done by the first stage in the chain; the
source. (Also commonly called generator or oscillator.) This first
stage doesn’t have an input from the engine point of view, but rather
manages any input data (like samples, compressed spectral streams or
whatever) internally, possibly by means of some shared resource
manager (like a sample bank, or a pre-caching read-ahead
direct-from-disk streaming engine).

Good idea. So I can generate the resampled sound, create a Mix_Chunk
with it and then play it. However, as the sample could be several
seconds long, I must ensure that the new resampled data starts at the
position matching the old one, so that I don’t hear the sound
“restarting”, or “clicks” between samples, when I change from one to
another. The bad news is that I can’t find a function in SDL_mixer
that can position the sound at a given offset from the beginning of
the sample – that’s only available for music.

To completely eliminate clicks, such a function would have to be
better than sample accurate… (Even slight, sub sample phase jumps
result in audible “thumps”, at least with relatively pure sounds such
as plain geometric waveforms.)

An alternative would be to generate the resampled remainder of the
sound and, when it has finished playing, loop the whole resampled
sample until the revolutions change again.
Is this a reasonable approach, or is it too complicated?

Well, it sounds a bit complicated, but more importantly, I don’t think
it’s going to work. You really need continuous real time pitch control
for something like this. Besides, if you’re going to implement the
resampling anyway, why not do it in the right place instead? :slight_smile:

For absolute maximum quality, at least with fast modulation, you
should preferably update the resampling ratio for every single sample
generated, as well as bandlimiting the frequency control input - but
in this application, you might get away with updating the ratio once
per buffer. Then again, for something like an F1 engine, that revs
from idle to redline in a fraction of a second, you’ll probably have
to update at least a few hundred times per second, to avoid annoying
amounts of pitch “stair-stepping”.
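As a hypothetical illustration of the per-buffer compromise: instead of jumping to the newly requested resampling ratio at a buffer boundary, the per-sample step can be ramped linearly across the buffer, turning pitch “stair-steps” into short glides (the names below are invented for the sketch):

```c
/* Hypothetical control-rate smoothing: ramp the per-sample resampling
 * step linearly from its current value to the requested target over
 * one buffer, instead of jumping at the buffer boundary. */
struct pitch_ramp {
    double step;     /* current per-sample step (resampling ratio) */
    double target;   /* ratio requested by the control input (rpm) */
};

/* Advance the ramp over one buffer of 'n' output samples, writing the
 * per-sample step values into steps[]. After the call, r->step has
 * reached r->target exactly. */
void ramp_steps(struct pitch_ramp *r, double *steps, int n)
{
    double d = (r->target - r->step) / (double)n;
    int i;
    for (i = 0; i < n; i++) {
        r->step += d;
        steps[i] = r->step;
    }
}
```

The resampler would then consume one entry of steps[] per output sample; updating the target a few hundred times per second, as suggested above, keeps the glides short enough for fast revving.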

Oh, and pitch isn’t all there is to it, BTW. For a realistic sound,
you also need to simulate the effect that the throttle has on the
sound. Regardless of revs, the sound should become louder and
more “open” as the throttle opens. You might also want to simulate
the bangs caused by the rev limiter and/or traction control killing
the ignition for short moments, at least if you’re dealing with race
engines. (Standard cars usually implement these things in a slower
but more friendly way, by closing the throttle instead of killing the
ignition.)

Anyway, interesting problem - and it’s becoming a bit of an FAQ. Maybe
it’s time for another little SDL programming example? :slight_smile:

On Wednesday 17 January 2007 13:31, Alberto Luaces wrote:

On Wednesday, 17 January 2007 14:14, David Olofson wrote:

Ok. So the mixer callbacks won’t be of much help here.

Well, not the way they’re supposed to be used, but you could probably
get away with implementing a resampling wavetable “oscillator”, the
way I suggested; by implementing an effect that ignores the input and
just generates output.

I see. The first time you suggested that, I thought it could be hard to
implement, so I decided to leave the mixer effects aside. Now I realize that
maybe this is the simpler way, because inside the callback function I can
keep track of which part of the sound is being played and what is needed for
the next buffer.

One thing to note is that I am not generating my waveform, but altering a
pre-existing bank of digitized sounds, so the code would be something like:

struct soundInfo
{
int rpms;
int numSample, soundSampleLength;
/* the number of samples played and the total of samples for that sound will
help us to know what is to play next */
int bankNumber;
/* which sound of our bank of sounds is being played for the present
frequency. If the frequency range falls out of this sound, we’ll start
playing the sound above or below in the bank */
};

void effect_function( int channel, void *stream, int len, void *udata)
{
struct soundInfo *p = (struct soundInfo *) udata;

/* check p->rpms to know which sound from our bank of sounds */
/* fits best at that frequency */

/* Interpolate enough samples to fill "len" bytes from the current */
/* stream position and write them into "stream" */
}

That “current stream position” would be scaled by the pitch to match the
position of the unmolested sound and get the real offset from the beginning
of it, so I can control what part is to be fed in each call.

For absolute maximum quality, at least with fast modulation, you
should preferably update the resampling ratio for every single sample
generated, as well as bandlimiting the frequency control input - but
in this application, you might get away with updating the ratio once
per buffer. Then again, for something like an F1 engine, that revs
from idle to redline in a fraction of a second, you’ll probably have
to update at least a few hundred times per second, to avoid annoying
amounts of pitch “stair-stepping”.
Oh, and pitch isn’t all there is to it, BTW. For a realistic sound,
you also need to simulate the effect that the throttle has on the
sound. Regardless of revs, the sound should become louder and
more “open” as the throttle opens. You might also want to simulate
the bangs caused by the rev limiter and/or traction control killing
the ignition for short moments, at least if you’re dealing with race
engines. (Standard cars usually implement these things in a slower
but more friendly way, by closing the throttle instead of killing the
ignition.)

Thank you for that explanation. It is very interesting. However, I hope I
don’t have to implement my sound routines down to that level! :slight_smile:

Anyway, interesting problem - and it’s becoming a bit of an FAQ. Maybe
it’s time for another little SDL programming example? :slight_smile:

Thank you very much for your help, David. I was also surprised that this topic
has been discussed so many times, but at least in the posts I have found, the
actual implementation of the sound stream feeding wasn’t shown. I hope then
that this post’s pseudocode could act as a temporary miniFAQ to this
question.

However I have to test it first! :wink: I’ll write when I have got it done.

Alberto

On Wednesday, 17 January 2007 14:14, David Olofson wrote:

Ok. So the mixer callbacks won’t be of much help here.

Well, not the way they’re supposed to be used, but you could
probably get away with implementing a resampling
wavetable “oscillator”, the way I suggested; by implementing an
effect that ignores the input and just generates output.

I see. The first time you suggested that, I thought it could
be hard to implement, so I decided to leave the mixer effects aside.
Now I realize that maybe this is the simpler way, because inside the
callback function I can keep track of which part of the sound is
being played and what is needed for the next buffer.

Exactly. :slight_smile: And, besides, this is how your average synth oscillator
is implemented. Apart from stepping outside the official semantics of
the SDL_mixer API, it’s actually the correct and straightforward way
of doing it.

One thing to note is that I am not generating my waveform, but
altering a pre-existing bank of digitized sounds,

Well, that’s the usual way.

(I think actual real time synthesis of engine sounds - and musical
instruments and other sounds, for that matter - pretty much died as
the 6581, AY*/YM*, OPL* and similar chips went out of fashion.)

so the code would be something like:

struct soundInfo
{
int rpms;
int numSample, soundSampleLength;
/* the number of samples played and the total of samples for that
sound will help us to know what is to play next */
int bankNumber;
/* which sound of our bank of sounds is being played for the present
frequency. If the frequency range falls out of this sound, we’ll
start playing the sound above or below in the bank */
};

void effect_function( int channel, void *stream, int len, void *udata)
{
struct soundInfo *p = (struct soundInfo *) udata;

/* check p->rpms to know which sound from our bank of sounds */
/* fits best at that frequency */

/* Interpolate enough samples to fill "len" bytes from the current */
/* stream position and write them into "stream" */
}

That “current stream position” would be scaled by the pitch to match
the position of the unmolested sound and get the real offset from
the beginning of it, so I can control what part is to be fed in
each call.

Keep in mind that the stream position should be fractional, so you
don’t get annoying buzzing or humming at the buffer rate.

Oh, and looping needs to be handled that way too; reset only the
integer sample position - not the fractional position.
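A common way to honour both rules (this is a generic sketch, not Audiality’s actual code) is to keep the play position in fixed point and subtract only whole samples at the loop point, so the fractional part carries over and the phase stays continuous:

```c
#include <stdint.h>

#define FRAC_BITS 32

/* Position and step as 32.32 fixed point: the top 32 bits are the
 * integer sample index, the bottom 32 the sub-sample fraction. */
typedef uint64_t fixpos;

/* Advance by one output sample and wrap at the loop end. Only whole
 * samples are subtracted on wrap, so the fractional part carries over
 * and there is no sub-sample phase jump at the loop point. */
fixpos advance(fixpos pos, fixpos step, uint32_t loop_len)
{
    pos += step;
    while ((pos >> FRAC_BITS) >= loop_len)
        pos -= (fixpos)loop_len << FRAC_BITS;   /* integer part only */
    return pos;
}
```

The same accumulator also gives the interpolator its inputs for free: the integer part indexes the waveform, the fractional part is the interpolation weight.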

…or you can just rip the “voice mixer” code from Audiality. :wink:

It supports a big matrix of sample formats, mono/stereo, various
interpolation methods and oversampling, but since you seem to have a
bank of waveforms (which can be bandlimited and resampled in a sound
editor, or at load time), you only really need one version. Anything
but cubic interpolation is pointless unless you want to run on an old
Pentium (memory access costs more than the interpolation), and the
oversampling isn’t needed if you mip-map the waveforms. (Well, 2x, if
you really want the treble flat up to Nyquist, but for all practical
matters, you can usually push cubic interpolation up to 1.5 * fs or
so without too much aliasing.)
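For reference, one widely used 4-point cubic is the Catmull-Rom form; whether it matches Audiality’s exact coefficients is not shown here, but it is a reasonable stand-in:

```c
/* 4-point cubic (Catmull-Rom) interpolation between samples x1 and x2
 * for a fractional position t in [0,1); x0 and x3 are the neighbouring
 * samples on either side. Reproduces linear ramps exactly and passes
 * through every original sample at t == 0. */
double cubic(double x0, double x1, double x2, double x3, double t)
{
    double a = -0.5 * x0 + 1.5 * x1 - 1.5 * x2 + 0.5 * x3;
    double b =        x0 - 2.5 * x1 + 2.0 * x2 - 0.5 * x3;
    double c = -0.5 * x0             + 0.5 * x2;
    return ((a * t + b) * t + c) * t + x1;
}
```

The four taps are the reason memory access dominates the cost on modern CPUs: the arithmetic is just a handful of multiply-adds per output sample.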

[…]

Thank you for that explanation. It is very interesting. However, I
hope I don’t have to implement my sound routines down to that
level! :slight_smile:

Well, if you have a bank of waveforms, all you need is some nice
cross-fading - but of course, you still need to figure out when and
how to use each waveform from the bank. :slight_smile:
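A hypothetical sketch of that selection step: pick the two bank entries bracketing the current rpm and derive a crossfade weight (the bank layout and names are invented for illustration):

```c
/* Hypothetical waveform bank: entry i was recorded at base_rpm[i],
 * sorted ascending. Given the current rpm, find the two neighbouring
 * entries and the crossfade weight for the upper one, in [0,1]. */
void pick_waveforms(const double *base_rpm, int n, double rpm,
                    int *lo, int *hi, double *fade)
{
    int i = 0;
    while (i < n - 1 && base_rpm[i + 1] < rpm)
        i++;
    *lo = i;
    *hi = (i < n - 1) ? i + 1 : i;
    if (*hi == *lo || rpm <= base_rpm[*lo])
        *fade = 0.0;                /* below/above the bank: no fade */
    else if (rpm >= base_rpm[*hi])
        *fade = 1.0;
    else
        *fade = (rpm - base_rpm[*lo])
              / (base_rpm[*hi] - base_rpm[*lo]);
}
```

The output sample would then be (1 - fade) * resample(lo) + fade * resample(hi), with both waveforms resampled to the same engine pitch so the crossfade doesn’t detune.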

Anyway, interesting problem - and it’s becoming a bit of an FAQ.
Maybe it’s time for another little SDL programming example? :slight_smile:

Thank you very much for your help, David. I was also surprised that
this topic has been discussed so many times, but at least in the
posts I have found, the actual implementation of the sound stream
feeding wasn’t shown. I hope then that this post’s pseudocode could
act as a temporary miniFAQ to this question.

Well, that’s why I’ve been thinking about coding an example; I still
haven’t seen one, despite people asking this question over and over.
Apparently, a sound effects mixer without pitch control and stuff
just doesn’t cut it these days…

However I have to test it first! :wink: I’ll write when I have got it
done.

Yeah, I think it would be nice to have some “canned code”,
demonstrating how to do it with SDL_mixer, as SDL_mixer seems to do
what most people need, apart from this “little detail” that is sound
FX pitch control.

On Wednesday 17 January 2007 15:53, Alberto Luaces wrote:

Alberto Luaces <aluaces udc.es> writes:

On Wednesday, 17 January 2007 14:14, David Olofson wrote:

Ok. So the mixer callbacks won’t be of much help here.

Well, not the way they’re supposed to be used, but you could probably
get away with implementing a resampling wavetable “oscillator”, the
way I suggested; by implementing an effect that ignores the input and
just generates output.

I see. The first time you suggested that, I thought it could be hard to
implement, so I decided to leave the mixer effects aside. Now I realize that
maybe this is the simpler way, because inside the callback function I can
keep track of which part of the sound is being played and what is needed for
the next buffer.

Pitch shifting of samples is supported by the MikMod library. Internally
SDL_mixer uses MikMod to play music modules but not for the sound effects. If
you use MikMod directly you may be able to shift your sample using the mixing
routines it comes with. The only disadvantage to this approach is that you
can’t use Ogg Vorbis files for your samples. If you’re using Wave files instead
then you’ll be in fine shape using MikMod.

Hello David,

On Wednesday, 17 January 2007 16:30, David Olofson wrote:

Keep in mind that the stream position should be fractional, so you
don’t get annoying buzzing or humming at the buffer rate.

Oh, and looping needs to be handled that way too; reset only the
integer sample position - not the fractional position.

…or you can just rip the “voice mixer” code from Audiality. :wink:

It supports a big matrix of sample formats, mono/stereo, various
interpolation methods and oversampling, but since you seem to have a
bank of waveforms (which can be bandlimited and resampled in a sound
editor, or at load time), you only really need one version. Anything
but cubic interpolation is pointless unless you want to run on an old
Pentium (memory access costs more than the interpolation), and the
oversampling isn’t needed if you mip-map the waveforms. (Well, 2x, if
you really want the treble flat up to Nyquist, but for all practical
matters, you can usually push cubic interpolation up to 1.5 * fs or
so without too much aliasing.)

Thank you very much for those tips. I’ll take a look to see how it is
implemented in Audiality.

From your advice about avoiding aliasing below 1.5 * fs, I think I will
need about 6 waveforms to cover the 1,000 - 8,000 rpm range.
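That estimate can be checked with a little arithmetic: if each waveform is only ever pushed up to about 1.5 times its native rate before switching to the next one, covering an 8:1 rpm span takes log(8)/log(1.5) ≈ 5.13, rounded up to 6 waveforms. A sketch under that assumption (names invented):

```c
/* Count how many waveforms are needed if each one is only ever played
 * up to 'max_ratio' times its native rate: keep multiplying the
 * covered rpm by that ratio until the whole range fits. Equivalent to
 * ceil(log(rpm_hi / rpm_lo) / log(max_ratio)). */
int waveforms_needed(double rpm_lo, double rpm_hi, double max_ratio)
{
    int n = 0;
    double covered = rpm_lo;
    while (covered < rpm_hi) {
        covered *= max_ratio;
        n++;
    }
    return n;
}
```

With rpm_lo = 1000, rpm_hi = 8000 and max_ratio = 1.5 this indeed gives 6, matching the estimate above.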

Yeah, I think it would be nice to have some “canned code”,
demonstrating how to do it with SDL_mixer, as SDL_mixer seems to do
what most people need, apart from this “little detail” that is sound
FX pitch control.

Heh, little detail :wink: . Racing games and flight simulators can account
for maybe 30% of the total. Everything carries an engine nowadays :slight_smile:

Hi Samuel,

On Wednesday, 17 January 2007 19:58, Samuel Crow wrote:

Pitch shifting of samples is supported by the MikMod library. Internally
SDL_mixer uses MikMod to play music modules but not for the sound effects.
If you use MikMod directly you may be able to shift your sample using the
mixing routines it comes with. The only disadvantage to this approach is
that you can’t use Ogg Vorbis files for your samples. If you’re using Wave
files instead then you’ll be in fine shape using MikMod.

I’m OK with only using WAV files; in fact this is what I’m doing at the
moment. The only gotcha is that I wanted to keep the dependencies of my
project to a minimum. Although I know MikMod is “highly” cross-platform, I’d
prefer not to deal with its setup on every system my program runs on. On
Linux it is very easy with the aid of the packaging systems, but on Windows
things change.

However if I fail to implement the effect I want, certainly this could be a
great solution…

Thanks,

Alberto

[…]

From your advice about avoiding aliasing below 1.5 * fs, I think I
will need about 6 waveforms to cover the 1,000 - 8,000 rpm range.

Well, I suspect there may not be much information to lose in the
treble anyway, so you may not need to keep the waveform sample rate
at or above 44.1 kHz all the time. If you play each waveform from 32
through 64 kHz (assuming a 44.1 kHz output sample rate), you’ll lose
some high treble (16+ kHz) sometimes, but chances are there isn’t any
real information in that frequency range anyway.

For comparison, Quake 3, RTCW and many other games use 22 kHz samples
and 22 kHz output, maximum… That is, no sound at all above 11 kHz.

Yeah, I think it would be nice to have some “canned code”,
demonstrating how to do it with SDL_mixer, as SDL_mixer seems to
do what most people need, apart from this “little detail” that is
sound FX pitch control.

Heh, little detail :wink: . Racing games and flight simulators can
account for maybe 30% of the total. Everything carries an engine
nowadays :slight_smile:

Yeah, I know… And engine sounds aren’t the only ones that could use
some pitch control and other stuff.

What are these projects using? I know some use OpenAL… Is there
anything that can be considered an “official” solution for use with
SDL?

Maybe it’s time for a new SDL add-on library? Some requirements:
* Generic mixer with pitch, pan, volume, effects etc
* Small, simple, easy to use
* No dependencies, except possibly SDL
* No differentiation between music and sounds
* Should run well on integer-only CPUs
* High quality resampling for faster CPUs
* Implementation and API language: C
* Should play nice with SDL
* Should play nice with SDL_sound

On Thursday 18 January 2007 11:37, Alberto Luaces wrote:

Offhand I can’t think of anyone more qualified to do it than you. Since F/OSS
is a meritocracy, you want to volunteer?

Jeff

On Thu January 18 2007 04:02, David Olofson wrote:

Maybe it’s time for a new SDL add-on library? Some requirements:

  • Generic mixer with pitch, pan, volume, effects etc
  • Small, simple, easy to use
  • No dependencies, except possibly SDL
  • No differentiation between music and sounds
  • Should run well on integer-only CPUs
  • High quality resampling for faster CPUs
  • Implementation and API language: C
  • Should play nice with SDL
  • Should play nice with SDL_sound

Hello !

Maybe it’s time for a new SDL add-on library? Some requirements:

  • Generic mixer with pitch, pan, volume, effects etc
  • Small, simple, easy to use
  • No dependencies, except possibly SDL
  • No differentiation between music and sounds
  • Should run well on integer-only CPUs
  • High quality resampling for faster CPUs
  • Implementation and API language: C
  • Should play nice with SDL
  • Should play nice with SDL_sound

Offhand I can’t think of anyone more qualified to do it than you. Since
F/OSS is a meritocracy, you want to volunteer?

When I read David’s emails about sound,
I always think only a few people know
more about sound than he does.

CU

Well, I’ve been thinking about doing something like this for quite
some time… Most of the low level code can be found in the old
Audiality engine. (Resamplers, a mixer, some effects…)

Of course, an engine synthesizer would make a handy and useful example
program. :wink:

On Thursday 18 January 2007 15:09, Jeff wrote:

On Thu January 18 2007 04:02, David Olofson wrote:

Maybe it’s time for a new SDL add-on library? Some requirements:

  • Generic mixer with pitch, pan, volume, effects etc
  • Small, simple, easy to use
  • No dependencies, except possibly SDL
  • No differentiation between music and sounds
  • Should run well on integer-only CPUs
  • High quality resampling for faster CPUs
  • Implementation and API language: C
  • Should play nice with SDL
  • Should play nice with SDL_sound

Offhand I can’t think of anyone more qualified to do it than you.
Since F/OSS is a meritocracy, you want to volunteer?

[…]

Offhand I can’t think of anyone more qualified to do it than you.
Since F/OSS is a meritocracy, you want to volunteer?

When I read David’s emails about sound,
I always think only a few people know
more about sound than he does.

Oh, I believe there are quite a few over at the linux-audio-dev and
music-dsp mailing lists. :smiley:

On Thursday 18 January 2007 15:12, Torsten Giebl wrote:

David Olofson <david olofson.net> writes:

What are these projects using? I know some use OpenAL… Is there
anything that can be considered an “official” solution for use with
SDL?

Maybe it’s time for a new SDL add-on library? Some requirements:

  • Generic mixer with pitch, pan, volume, effects etc
  • Small, simple, easy to use
  • No dependencies, except possibly SDL
  • No differentiation between music and sounds
  • Should run well on integer-only CPUs
  • High quality resampling for faster CPUs
  • Implementation and API language: C
  • Should play nice with SDL
  • Should play nice with SDL_sound

libMikMod does all of these things and is used internally by both SDL_sound and
SDL_mixer. Why not just use it instead? I think there was even an experimental
build of SDL_mixer that linked in MikMod.so directly instead of just the subset
that was pasted into the SDL_mixer source code.

David Olofson <david olofson.net> writes:

What are these projects using? I know some use OpenAL… Is there
anything that can be considered an “official” solution for use
with SDL?

Maybe it’s time for a new SDL add-on library? Some requirements:
* Generic mixer with pitch, pan, volume, effects etc
* Small, simple, easy to use
* No dependencies, except possibly SDL
* No differentiation between music and sounds
* Should run well on integer-only CPUs
* High quality resampling for faster CPUs
* Implementation and API language: C
* Should play nice with SDL
* Should play nice with SDL_sound

libMikMod does all of these things and is used internally by both
SDL_sound and SDL_mixer. Why not just use it instead?

Well, I don’t know… Size? API complexity? I’ll look into it…

I think there was even an experimental build of SDL_mixer that
linked in MikMod.so directly instead of just the subset
that was pasted into the SDL_mixer source code.

It strikes me as a bit odd to use only parts of something that is a
real library anyway - especially on platforms where it builds as a
shared library. Why is it done this way in SDL_mixer?

On Thursday 18 January 2007 17:06, Samuel Crow wrote:

Hello !

It strikes me as a bit odd to use only parts of something that is a
real library anyway - especially on platforms where it builds as a shared
library. Why is it done this way in SDL_mixer?

Just a guess, but I think that when the idea of SDL_mixer was born,
libmikmod was not really usable on all platforms SDL supported.
Adding it directly to SDL_mixer made it easier to make changes.

CU

Makes sense…

On Thursday 18 January 2007 17:51, Torsten Giebl wrote:

Hello !

It strikes me as a bit odd to use only parts of something that is
a real library anyway - especially on platforms where it builds as
a shared library. Why is it done this way in SDL_mixer?

Just a guess, but I think that when the idea of SDL_mixer was born,
libmikmod was not really usable on all platforms SDL supported.
Adding it directly to SDL_mixer made it easier to make changes.

David Olofson <david olofson.net> writes:
[…]

Maybe it’s time for a new SDL add-on library? Some requirements:
* Generic mixer with pitch, pan, volume, effects etc
* Small, simple, easy to use
* No dependencies, except possibly SDL
* No differentiation between music and sounds
* Should run well on integer-only CPUs
* High quality resampling for faster CPUs
* Implementation and API language: C
* Should play nice with SDL
* Should play nice with SDL_sound

libMikMod does all of these things and is used internally by both
SDL_sound and SDL_mixer. Why not just use it instead?

Looking at the documentation, a few questions pop up:

* Is it still true that only WAV files can be
  used for samples? I suppose it would be
  possible to use SDL_sound for decoding other
  formats to memory, and then having libMikMod
  load it from there...

* It appears that the "Player" is a singleton,
  which would mean that only one module can be
  played at a time. Is there a way around this?
  (For crossfading, using music as sound effects
  and stuff like that.)
     Speaking of which; what, if anything, does
  SDL_sound do about this?

* I can't seem to find a pitch/transpose control
  for the Player. Not exactly essential for
  music (though some games in the 8 bit days
  did transpose the music), but nice if you want
  to use music formats for sound effects.

I suppose one could solve these issues by wrapping SDL_sound and
libMikMod - but then we have a rather substantial set of
dependencies, all of a sudden.

My libmikmod.so (AMD64) is 243 kB and libSDL_sound is 163 kB, and I
guess a standard build of SDL_sound pulls in a few other libs as
well. (Could be dynamically loaded, though; I don’t know…) For
comparison, SDL is 382 kB here, and SDL_mixer is 63 kB.

My idea was more along the lines of providing something very lightweight
and modular, with few, if any, dependencies. Smaller and
simpler than SDL_mixer, with no explicit music or decoding support at
all. Something that does one thing, and does it well.

There are several other libs (SDL_sound, libsndfile, various codec
libs etc…) that specialize in decoding/playing various types of
files and streams. Just plug one of these in if you need music. The
interesting part is that these would stream through the New Mixer,
giving you pitch control, effects and stuff, just as for sound
effects.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Thursday 18 January 2007 17:06, Samuel Crow wrote:

David Olofson <david olofson.net> writes:
[…]

Maybe it’s time for a new SDL add-on library? Some requirements:
* Generic mixer with pitch, pan, volume, effects etc
* Small, simple, easy to use
* No dependencies, except possibly SDL
* No differentiation between music and sounds
* Should run well on integer-only CPUs
* High quality resampling for faster CPUs
* Implementation and API language: C
* Should play nice with SDL
* Should play nice with SDL_sound

libMikMod does all of these things and is used internally by both
SDL_sound and SDL_mixer. Why not just use it instead?

Looking at the documentation, a few questions pop up:

  • Is it still true that only WAV files can be
    used for samples? I suppose it would be
    possible to use SDL_sound for decoding other
    formats to memory, and then having libMikMod
    load it from there…

.WAV is the only filetype that can be used as a sample, but the streamed
formats are entirely another story. Pitch shifting on streamed players is
rather difficult and you might have to resort to system-specific code for
doing that.

  • It appears that the “Player” is a singleton,
    which would mean that only one module can be
    played at a time. Is there a way around this?
    (For crossfading, using music as sound effects
    and stuff like that.)
    Speaking of which; what, if anything, does
    SDL_sound do about this?

SDL_sound wraps both MikMod and ModPlugTracker. It’s possible that
ModPlugTracker’s library might handle this. Not an ideal solution IMHO, since
you would have two music players loaded simultaneously.

  • I can’t seem to find a pitch/transpose control
    for the Player. Not exactly essential for
    music (though some games in the 8 bit days
    did transpose the music), but nice if you want
    to use music formats for sound effects.

I think it will only transpose individual samples, using the
Voice_SetFrequency() function, but I haven’t tried it. Have a look at
http://mikmod.raphnet.net/doc/libmikmod-3.1.10/docs/mikmod.html
for the specific documentation on the Voice_SetFrequency() function. (Just
scroll down from the Sample Functions link.)

My idea was more along the lines of providing something very lightweight
and modular, with few, if any, dependencies. Smaller and
simpler than SDL_mixer, with no explicit music or decoding support at
all. Something that does one thing, and does it well.

I would certainly welcome such a library, because I am writing a program that
compiles ProTracker music modules into C source, and I am having to make it
system-specific to the Amiga-like operating systems that support AHI drivers,
since it would be so redundant with SDL_mixer and MikMod around.

There are several other libs (SDL_sound, libsndfile, various codec
libs etc…) that specialize in decoding/playing various types of
files and streams. Just plug one of these in if you need music. The
interesting part is that these would stream through the New Mixer,
giving you pitch control, effects and stuff, just as for sound
effects.

I think this is the problem with getting things to work in platform-independent
libraries like SDL: I don’t know what Windows is like, or ALSA on Linux, or the Mac.
I use SDL because there is little money to be made on Amiga-like OSs with AHI
drivers.

If you wanted to port the AHI source code it would make my life simpler but I
doubt it would replace ALSA or some of the better supported driver/mixer
formats. The AHI source code is at http://arp2.berlios.de/ahi/ if you’re
interested. (The AROS version is x86, the original AmigaOS 2.x-3.x version is
68000-based, and the AmigaOS 4.0 and MorphOS versions are PowerPC.)

I just thought that I would point out MikMod since it’s portable and LGPL; AHI
is also GPL/LGPL for its respective parts. I don’t know how hard it
is to write a wrapper for a sound driver, but I’ll be using the AHI
drivers for my Music Compiler nonetheless.

I’ll eagerly await any portable solutions you can come up with. I’m not an
expert on sound but I’m pretty well initiated with how things are done on the
Amiga. Drop me an email or post here and I’ll try to help out any way I can.

–Sam