SDL_MixAudioFormat outputs silence but non-mixed audio works fine? (simple example inside)

I’m using SDL2 on macOS, but it seems like SDL_MixAudioFormat isn’t working properly. It doesn’t crash or report any errors, but the output is silent.

Here is the function:

static void audio_sdl_play(const uint8_t *buf, size_t len) {
   uint8_t *mix_buf[len];
   SDL_memset(mix_buf, 0, len);
   SDL_MixAudioFormat(mix_buf, buf, AUDIO_S16, len, SDL_MIX_MAXVOLUME);
   SDL_QueueAudio(dev, mix_buf, len);
}

However, if I do not use the mixer it works perfectly fine:

static void audio_sdl_play(const uint8_t *buf, size_t len) {
   SDL_QueueAudio(dev, buf, len);
}

Am I doing something wrong?

I’m seeing compiler warnings, but I’m not sure how to fix them, since my code looks the same as every example I’ve seen online.

Here are the warnings:

src/pc/audio/audio_sdl2.c: In function 'audio_sdl_play':
src/pc/audio/audio_sdl2.c:53:28: warning: passing argument 1 of 'SDL_MixAudioFormat' from incompatible pointer type [-Wincompatible-pointer-types]
   53 |         SDL_MixAudioFormat(mix_buf, buf, AUDIO_S16, len, 128);
      |                            ^~~~~~~
      |                            |
      |                            uint8_t ** {aka unsigned char **}
In file included from /opt/homebrew/include/SDL2/SDL.h:36,
                 from src/pc/audio/audio_sdl2.c:4:
/opt/homebrew/include/SDL2/SDL_audio.h:1178:57: note: expected 'Uint8 *' {aka 'unsigned char *'} but argument is of type 'uint8_t **' {aka 'unsigned char **'}
 1178 | extern DECLSPEC void SDLCALL SDL_MixAudioFormat(Uint8 * dst,
      |                                                 ~~~~~~~~^~~

I bet it’s that redundant pointer. I recommend uint8_t mix_buf[len]; instead.

Thanks so much. That fixed the compiler warnings but it still doesn’t output any audio.

Do you see anything else wrong?

The compiler warnings are because uint8_t *mix_buf[len] creates an array of len pointers, not a buffer of len bytes.
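In other words:

   uint8_t *mix_buf[len];   /* array of len pointers to uint8_t -- not what you want */
   uint8_t mix_buf[len];    /* array of len bytes -- what SDL_MixAudioFormat expects */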

Past that, I have no idea. If just directly queueing the audio works, why not do that?

I don’t know, try Uint8 instead of uint8_t

Or let mix_buf be a global variable…

Or SDL_MixAudio instead of SDL_MixAudioFormat

Thanks again so much for the help everybody.

I’ve tried changing to SDL_MixAudio and using Uint8 instead of uint8_t, but neither helped. I can try using mix_buf as a global variable, but since the length changes I would have to malloc/free the buffer every time I queue audio, right? (sorry, I’m more of a C# developer than C)
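Something like this is what I mean, if I understand the suggestion right:

   static uint8_t *mix_buf;   /* global, but reallocated per call since len varies */

   static void audio_sdl_play(const uint8_t *buf, size_t len) {
      mix_buf = malloc(len);   /* malloc/free from <stdlib.h> on every call */
      SDL_memset(mix_buf, 0, len);
      SDL_MixAudioFormat(mix_buf, buf, AUDIO_S16, len, SDL_MIX_MAXVOLUME);
      SDL_QueueAudio(dev, mix_buf, len);
      free(mix_buf);
   }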

Also, the project I’m trying to fix isn’t mine so I apologize for any vague explanations. However, according to the code the only purpose of the call to SDL_MixAudioFormat is to adjust the volume:

SDL_MixAudioFormat(mix_buf, buf, AUDIO_S16, len, SDL_MIX_MAXVOLUME * configOverallVolume);

Because the code is quite simple I thought it was strange it wasn’t working and was hoping I could figure it out.

Is len in samples or in bytes? SDL expects bytes.
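(For AUDIO_S16 each sample is 2 bytes per channel, so a sample count would need to be converted first; num_samples and channels here are just placeholder names:)

   size_t len_bytes = num_samples * channels * sizeof(int16_t);   /* AUDIO_S16 = 2 bytes per sample */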

It would be helpful to see what is actually in mix_buf after the call to SDL_MixAudioFormat(), as a test.
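Even just dumping the first few values would tell you something, e.g.:

   /* dump the first few mixed samples (needs <stdio.h>) */
   const int16_t *samples = (const int16_t *)mix_buf;
   for (size_t i = 0; i < 8 && (i + 1) * sizeof(int16_t) <= len; i++)
      printf("%d ", samples[i]);
   printf("\n");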

What is the value of configOverallVolume?

Wrote a little test app, and it works fine for me with SDL2 on my Mac:
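(Roughly along these lines; a stripped-down sketch that generates one second of a sine wave, mixes it into a zeroed buffer at full volume, and queues it:)

#include <SDL2/SDL.h>
#include <math.h>

int main(void) {
   if (SDL_Init(SDL_INIT_AUDIO) != 0) {
      SDL_Log("SDL_Init failed: %s", SDL_GetError());
      return 1;
   }

   SDL_AudioSpec want;
   SDL_zero(want);
   want.freq = 44100;
   want.format = AUDIO_S16;
   want.channels = 1;
   want.samples = 4096;
   want.callback = NULL;   /* no callback, we use SDL_QueueAudio */

   SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, NULL, 0);
   if (dev == 0) {
      SDL_Log("SDL_OpenAudioDevice failed: %s", SDL_GetError());
      return 1;
   }

   /* one second of a 440 Hz sine wave, signed 16-bit mono */
   static Sint16 src[44100];
   static Uint8 mix_buf[sizeof(src)];
   for (int i = 0; i < 44100; i++)
      src[i] = (Sint16)(3000.0 * sin(2.0 * M_PI * 440.0 * i / 44100.0));

   /* mix into a zeroed buffer at full volume, then queue the result */
   SDL_memset(mix_buf, 0, sizeof(mix_buf));
   SDL_MixAudioFormat(mix_buf, (const Uint8 *)src, AUDIO_S16, sizeof(src), SDL_MIX_MAXVOLUME);
   SDL_QueueAudio(dev, mix_buf, sizeof(mix_buf));

   SDL_PauseAudioDevice(dev, 0);   /* start playback */
   SDL_Delay(1500);

   SDL_CloseAudioDevice(dev);
   SDL_Quit();
   return 0;
}

If that plays a tone for you too, SDL_MixAudioFormat itself is fine and the problem is somewhere in how your project calls it.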

You have to check everything one by one…
E.g. pass 128 instead of SDL_MIX_MAXVOLUME, and check the mix_buf contents after calling SDL_MixAudio.

But it’s kinda pointless to use SDL_MixAudio just for volume.
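If volume is all it’s used for, you could just scale the samples yourself, roughly like this (assuming AUDIO_S16 in native byte order):

   /* rough sketch: scale signed 16-bit samples in place by a 0.0 .. 1.0 volume factor */
   static void scale_volume_s16(uint8_t *buf, size_t len, float volume) {
      int16_t *samples = (int16_t *)buf;
      for (size_t i = 0; i < len / sizeof(int16_t); i++)
         samples[i] = (int16_t)(samples[i] * volume);
   }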