You figured this out, but I wanted to expand on the explanation for anyone finding this on a Google search:
The division is correct: if len = 10, SDL wants you to fill a buffer of ten 8-bit bytes, but "out" is an array of 16-bit ints, so writing to out[0] touches bytes 1 and 2, and out[9] touches bytes 19 and 20. That means you're writing twice as much audio data as requested (hence the weird sound: SDL only plays the buffer it asked you to fill, so pieces of your sine wave are missing from the playback), and you're also overwriting memory past the end of the buffer, so all bets are off.
(This callback uses a number way higher than 10 in practice; that's just for clarity here.)
So len / 2 (more specifically: len / sizeof(Sint16)) means the last thing you write to is out[4], for bytes 9 and 10, and everything works out. If you had used the Float32 format, it would be len / sizeof(float), i.e. len / 4.
Also note that this is for mono; for stereo it'd be len / (2 * sizeof(Sint16)), because each frame holds one 2-byte sample (1 Sint16) for the left channel and one for the right.
By the way, Ryan, why does the callback use Uint8* and not just void*?
With void* it’d be clearer that the actual type varies.
> why does the callback use Uint8* and not just void*?
I always assumed it was to make it easy to move around on byte offsets regardless of your data (void* is clearer, but you can't do myVoidPtr += sizeof(Sint16)). Maybe we should change that for 2.1, though.
Hmm, it's true that this way it's easier to work with byte offsets, but I guess the usual (and easier + more intuitive) way is to cast it to Sint16* (or whatever type is appropriate for the sample format in use) before operating on the pointer.