Hi,
To stay compatible I need a sound system that supports:
- playing MIDIs
- playing WAVs at a variable frequency (possibly random)
- at the same time (multiple channels)
Currently I’m using SDL_mixer (MIDI + multiple channels), and I
implemented a work-around to change the WAV samples’ frequency at run
time. This is done through an empty sound of the right resampled
length, and a SDL_mixer Effect that replaces the empty buffer with the
run-time-resampled WAV.
http://git.savannah.gnu.org/cgit/freedink.git/tree/src/sfx.c
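The core of that work-around — sizing the empty chunk up front, then overwriting it with run-time-resampled data from inside the effect callback — can be sketched roughly like this. This is a simplified illustration with hypothetical names (plain `int16_t` mono samples, nearest-neighbor stepping), not the actual sfx.c code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Length (in samples) of 'src_len' samples played at 'src_rate'
   once resampled to 'dst_rate' -- used to size the empty chunk. */
size_t resampled_len(size_t src_len, int src_rate, int dst_rate)
{
    return (size_t)((double)src_len * dst_rate / src_rate);
}

/* Nearest-neighbor resampler: the kind of work the effect callback
   does to replace the empty buffer with resampled WAV data. */
void resample_nn(const int16_t *src, size_t src_len, int src_rate,
                 int16_t *dst, size_t dst_len, int dst_rate)
{
    double step = (double)src_rate / dst_rate;
    for (size_t i = 0; i < dst_len; i++) {
        size_t j = (size_t)(i * step);
        if (j >= src_len)
            j = src_len - 1;   /* clamp to the last source sample */
        dst[i] = src[j];
    }
}
```
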
That sounds great! Question: do you find anything negative about your
current solution aside from the hackishness of creating a fake buffer?
So far I didn’t hit any problem, besides doubling the channels’ memory
usage.
I believe that since you are resampling in real time (that is to say,
inside an “effect” callback), you actually do not need to give the
Mix_Chunk any data at all!
I just looked through the source code for SDL_mixer SVN revision 4211.
Here is the function you are currently using to create your fake
Mix_Chunk:
<snip!>
I am pretty positive (spent about 15 minutes looking through the code
step by step) that SDL_mixer will never touch the contents of
Mix_Chunk.abuf if you register your own effect on it.
It looks to me that chunk->abuf is assigned to mix_channel[i].samples,
then .samples is passed to Mix_DoEffects(…, void* snd, …), where
there’s a memcpy(…, snd, …).
I also had crashes when using a growable shared buffer (instead of one
buffer per chunk), so I think abuf is used to bootstrap the effects
chain. Did I miss something maybe?
The main point of using ‘abuf’, though, is to define when SDL_mixer
needs to stop playing, so maybe it’s possible to register small
looping sounds and find a way to stop playing while within the
callback (but I think you shouldn’t call Mix_HaltChannel(…) from
within the callback; this may need to be done from the outside).
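One way to avoid halting from inside the callback would be for the effect to just raise a per-channel flag when its source runs dry, and let the main loop do the actual Mix_HaltChannel(…) call. A sketch of that pattern — `channel_done`, `effect_cb` and the `remaining` bookkeeping are all hypothetical names, though the callback signature matches SDL_mixer’s Mix_EffectFunc_t:

```c
#include <stdint.h>
#include <string.h>

#define NUM_CHANNELS 8

/* Set by the effect callback, read by the main loop -- which is the
   only place that would actually call Mix_HaltChannel(chan). */
static volatile int channel_done[NUM_CHANNELS];

/* Mix_EffectFunc_t shape: fill 'stream' with 'len' bytes.
   Here 'udata' points to the bytes of source material left. */
static void effect_cb(int chan, void *stream, int len, void *udata)
{
    uint32_t *remaining = udata;
    if (*remaining <= (uint32_t)len) {
        memset(stream, 0, len);   /* pad the tail with silence */
        channel_done[chan] = 1;   /* ask the main loop to halt us */
        *remaining = 0;
    } else {
        /* ... resample *remaining bytes of source into stream ... */
        *remaining -= (uint32_t)len;
    }
}
```

The main loop would then poll `channel_done[]` once per frame and halt (and unregister effects on) any channel whose flag is set.
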
This works fine enough, but sadly it increases the memory usage
quite a bit (the empty/useless memory buffer, plus the sample is
pre-converted to the hardware 16/8 MSB/LSB mono/stereo format to
reduce the number of filter combinations; usually better quality than
the original 8-bit/mono sounds). As I’m currently porting FreeDink to
the PSP, this memory usage becomes critical and I need to get rid of it.
Just write your own Mix_Chunk allocator function that sets abuf
to NULL (or whatever you want, really). Just be careful to make sure
that you only ever play it with your own “effect”!
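Something along those lines, perhaps — a stand-in Mix_Chunk struct is declared here so the sketch is self-contained, but its fields (allocated, abuf, alen, volume) match SDL_mixer’s public header; whether SDL_mixer really never dereferences abuf is exactly what’s debated above, so this would need testing:

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for SDL_mixer's Mix_Chunk (same public fields). */
typedef struct {
    int allocated;
    uint8_t *abuf;
    uint32_t alen;
    uint8_t volume;
} Mix_Chunk;

/* Allocate a data-less chunk: abuf stays NULL, but alen still tells
   SDL_mixer how long to keep the channel "playing", so the effect
   callback gets called for the right duration. */
Mix_Chunk *create_empty_chunk(uint32_t alen)
{
    Mix_Chunk *chunk = malloc(sizeof(*chunk));
    if (!chunk)
        return NULL;
    chunk->allocated = 0;   /* Mix_FreeChunk() must not free abuf */
    chunk->abuf = NULL;     /* no fake buffer at all */
    chunk->alen = alen;     /* resampled length in bytes */
    chunk->volume = 128;    /* MIX_MAX_VOLUME */
    return chunk;
}
```
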
I just realized that implementing frequency shift in SDL_mixer would
get rid of the fake buffers, but that SDL_mixer would still
pre-convert the sound to the hardware format (using
SDL_BuildAudioCVT(…)).
Since the PSP platform apparently locks the output format to
44.1kHz/S16LSB, my low-quality sounds would still take a lot of
memory. I also need to keep the audio chunks in their original format
and convert them to the hardware format in real time.
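To put numbers on that: even staying mono, an 8-bit 11025 Hz sound grows 8x once pre-converted to 44.1 kHz / S16 (4x the rate, 2x the sample width) — and stereo would double it again. A tiny helper to see the cost (illustrative only):

```c
#include <stdint.h>

/* Bytes per second of PCM audio for a given sample rate,
   bytes-per-sample and channel count -- handy to compare a sound's
   original footprint with its pre-converted hardware-format size. */
uint32_t bytes_per_second(int rate, int bytes_per_sample, int channels)
{
    return (uint32_t)rate * (uint32_t)bytes_per_sample
         * (uint32_t)channels;
}
```
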
To reduce the number of combinations, the frequency-conversion temp
buffer (chunksize=4096B) could be fixed to S16LSB (or even floats) and
would be converted to the hardware format in a second pass; it
wouldn’t take much memory nor CPU, I think.

On Sat, Apr 25, 2009 at 06:27:31PM -0400, Donny Viszneki wrote:
On Sat, Apr 25, 2009 at 1:44 PM, Sylvain Beucler <@Sylvain_Beucler> wrote:
–
Sylvain