No sound using SDL audio

Hi all!

I’ve been trying for two days to use the SDL library to play a wave file, without success. I’ve been browsing the Internet looking for code, bug reports, advice, and documentation, but found nothing that seems to help.

Here is the code:

#include <iostream>
#include <ostream>
#include <stdio.h>

#include <SDL.h>
#include <SDL_audio.h>

#include <wiringPi.h>

using namespace std;

uint32_t audioLen;
uint8_t *audioPos;

void audioCallback(void *userdata, uint8_t *stream, int len){
	if(audioLen == 0) return;

	if(len > audioLen){
		len = audioLen;
	}

	SDL_memset(stream, 0, len);

	SDL_MixAudio(stream, audioPos, len, 100);

	audioPos += len;
	audioLen -= len;

	cout << "writing " << len << " bytes, from ";
	cout << (void*)audioPos << " to " << (void*)stream << endl;
}


void testDevices(){
	for(int i = 0; i < SDL_GetNumAudioDrivers(); ++i){
		printf("Audio driver %d: %s\n", i, SDL_GetAudioDriver(i));
	}

	cout << "current audio driver is " << SDL_GetCurrentAudioDriver() << endl;

	int nbDevice = SDL_GetNumAudioDevices(0);

	for(int i = 0; i < nbDevice; ++i){
		cout << "device n°" << i << ": ";
		cout << SDL_GetAudioDeviceName(i, 0) << endl;
	}
}

int main(int argc, char* argv[]){

    if(SDL_Init(SDL_INIT_AUDIO) != 0){
        cout << "unable to init SDL" << endl;
        return 1;
    }

    SDL_AudioSpec want, have;
    uint32_t waveLength;
    uint8_t *waveBuffer;

    SDL_AudioDeviceID device;

	if(SDL_LoadWAV("/home/pi/music/paint.wav", &want, &waveBuffer, &waveLength) == NULL){
		cout << "could not open file" << endl;

	} else {

		want.callback = audioCallback;
		want.userdata = NULL;

		cout << "file informations:" << endl;
		cout << "freq:     " << want.freq << endl;
		cout << "format:   " << want.format << endl;
		cout << "channels: " << (int)want.channels << endl;
		cout << "silence:  " << (int)want.silence << endl;
		cout << "samples:  " << want.samples << endl;
		cout << "size:     " << want.size << endl;

		audioPos = waveBuffer;
		audioLen = waveLength;

		cout << "file is " << waveLength << " samples long" << endl;
		cout << "buffer start is at address " << (void*)waveBuffer << endl;

		device = SDL_OpenAudioDevice(SDL_GetAudioDeviceName(0, 0), 0, &want, &have, SDL_AUDIO_ALLOW_FORMAT_CHANGE);
		cout << "play audio on device n°" << device << endl;

		SDL_PauseAudioDevice(device, 0);

		uint32_t start = millis();
		cout << "start feeding buffer" << endl;

		while(audioLen > 0){
			SDL_Delay(100);
			if(millis() - start > 5000) break;
		}

		SDL_CloseAudioDevice(device);
		SDL_FreeWAV(waveBuffer);
	}

	SDL_Quit();
	return 0;
}

There doesn’t seem to be any error while executing. The file is found, the values read from it are right, and the program runs to the end without problems. But there is no sound.

This is intended to run on a Raspberry Pi, but the same code has been tested locally on an Ubuntu laptop as well. I’ve seen that sometimes you have to force the audio driver by setting the SDL_AUDIODRIVER variable to another value (say pulseaudio, alsa or dsp on Linux). I’ve seen no difference: when I change this value (I reload the .bashrc file each time), the program’s output doesn’t change, and stays on the same audio driver.

There is probably something I’m missing, and I fear it’s a really simple thing. But at this point I cannot figure out what!

Thank you in advance, hope someone here will be able to give an answer.

The main problem is that SDL_MixAudio only works on the audio device with id 1. This is a compatibility device for the old API, which could only open one device. Since you do not open the device through the old API, you always get a device id higher than 1. Use SDL_MixAudioFormat instead. Because it requires you to pass the audio format of the source and destination, you need to take care of something else (see the first point below).

A few things on the rest of your code:

  • SDL_AUDIO_ALLOW_FORMAT_CHANGE is problematic when trying to write a WAVE file directly to the stream. You could end up with a floating-point format, and the output would be garbled or silent. Pass 0 to force it to use your parameters. Newer versions of SDL (2.0.6 and later, I think) will automatically convert your stream to the format of the audio device; earlier versions might throw an error if the device can’t handle the format you want.

  • The callback expects the full length of the buffer to be set. Even if you have nothing to play, you should set the whole buffer to 0 or you might hear loops of previous buffers or noise.

  • Your code prints "file is " << waveLength << " samples long" but SDL_LoadWAV returns the length of the buffer in bytes, not samples.

  • No need to call SDL_PauseAudio. Again, this is for the old API and only for device id 1.

Hello ChliHug, and thank you for your answer!

I’ve changed SDL_MixAudio to SDL_MixAudioFormat, as you said, and… It’s all right!! I now have the Rolling Stones playing!!

About SDL_AUDIO_ALLOW_FORMAT_CHANGE: the first example I tried used queuing instead of a callback; I believe I found it in the documentation of SDL_Audio. I’ve been asking myself whether I needed to keep that flag, thinking that a regular wave file from a CD would probably be fine. So this format-change option would preferably be used with heavy settings (like a very high sampling frequency with a big sample size), or uncommon settings (like a low, non-standard sampling frequency), is that it?

Thank you for the remark on my print statement. It was clear to me, after reading the doc, that SDL_MixAudio needed bytes, not samples, but I didn’t understand the same held for the values from SDL_LoadWAV. By the way, I should have, as I was wondering why the pointer to the buffer was a uint8_t*, instead of (in my case) uint16_t* or int16_t*. So the library always uses a byte pointer, and internally deals with whatever is needed.

A word about your note on the need to set the buffer to 0 in the callback: I’m not sure I understand it. I should set the buffer to 0 before adjusting its size according to the number of bytes left to read, is that it? So at the end of playback I won’t have, say, one half of the buffer filled with sound and the other half filled with undefined data. This seems quite logical, in fact! :slight_smile:

And a last word about SDL_MixAudioFormat. The doc says it isn’t required. So I can just copy bytes “by hand” from my user buffer to the stream buffer in the callback, taking care of endianness and type conversion? And then do whatever mixing I need: more than two files, data from a wave table I create, and so on?

Again, thank you for your help, I am now able to go on with this project!

Regarding format change: I’m not that familiar with all the audio backends, but I think they choose their native formats and sample rates if you allow them to change. This is useful to avoid multiple conversions when one would suffice, or when you just want to know what the backend is capable of.

Regarding the callback: Yes, exactly. In your code you can just move the SDL_memset up to the first line in the callback.

SDL_MixAudioFormat is just in there to get some simple mixing done. Once you have more sounds, you need to implement something more complex. If you’re only going to work with one format and sample rate, you can indeed just add, scale, and copy the integers over into the buffer (given that you force SDL to do the conversion should one be necessary). There’s also SDL_mixer, which builds on top of the SDL audio API and might do the trick for you. I don’t have much experience with it, sadly.

Thanks for this quick reply!

I’ve seen the SDL_mixer library; in fact I even saw it before the bare SDL library. I tried to use it, but it gave me compilation errors that I wasn’t able to resolve. I should take a deeper look, but it seems to have to do with C++11 changing the way char arrays are handled, causing undefined references to standard functions. So I’ve set it aside for now, even though some of SDL_mixer’s functions would indeed be of great use for my project!

Thank you again for your advice!