See when SDL_Mixer performs a callback

Hi,

For my project I’m interested in determining the current position of a playing chunk, but SDL_Mixer doesn’t really offer a function to do this. My thought was to count how many times the callback function is called, to get a usable estimate of how many bytes have been processed as the sound plays, and go from there.

Alternatives I’ve seen include creating an SDL timer and hoping everything lines up closely enough, or using Mix_RegisterEffect to pass the audio through an effect that doesn’t alter the sound and using that effect’s callback instead. Both of these seem like unnecessary additional processes, considering Mixer is probably using its own callback functions behind the scenes.
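
For reference, I imagine the effect route would look roughly like this (only a sketch; the counter and function names are placeholders of mine):

```c
#include <SDL.h>
#include <SDL_mixer.h>

/* Sketch of the "no-op effect" idea: the effect leaves the audio untouched
 * and just counts how many bytes Mixer has processed for the channel. */
static Uint32 bytes_processed = 0;

static void counting_effect(int chan, void *stream, int len, void *udata)
{
    (void)chan; (void)stream; (void)udata;
    bytes_processed += (Uint32)len;   /* audio passes through unchanged */
}

static void counting_done(int chan, void *udata)
{
    (void)chan; (void)udata;
    bytes_processed = 0;              /* channel halted or effect removed */
}

/* After Mix_PlayChannel() returns a channel number:
 *   Mix_RegisterEffect(channel, counting_effect, counting_done, NULL);
 */
```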

In this case, the process of determining how far along in the chunk you are would actually be easier to implement with the stock SDL library, as you have more influence over the workings of the callback function. You could easily keep a counter that increases by the callback’s buffer length in bytes each time it is called.
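
Something like this is what I have in mind with stock SDL2 (a minimal sketch; the names, format, and buffer size are just assumptions):

```c
#include <SDL.h>

/* Minimal sketch: the callback bumps a counter by however many bytes it was
 * asked to fill, which gives a running estimate of the playback position. */
static SDL_atomic_t bytes_consumed;

static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;
    SDL_memset(stream, 0, (size_t)len);   /* copy/mix the sound here; silence as a stand-in */
    SDL_AtomicAdd(&bytes_consumed, len);  /* playback position estimate in bytes */
}

static SDL_AudioDeviceID open_device(void)
{
    SDL_AudioSpec want, have;
    SDL_zero(want);
    want.freq     = 44100;
    want.format   = AUDIO_S16SYS;
    want.channels = 2;
    want.samples  = 1024;
    want.callback = audio_callback;
    return SDL_OpenAudioDevice(NULL, 0, &want, &have, 0);
}
```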

Just from typing this out, it occurred to me that the callback used in Mixer may only be applied to the whole mixdown of all the channels, rather than a separate function being used per channel. That would make sense, and would explain why sending the audio through an effect is a working solution, but I’m not 100% sure, so I’ll ask anyway to gain more insight.

Thanks,
tewii

When using the stock SDL2 library you don’t even need to modify the callback function: you can find out how much of the chunk is still queued (and from that deduce how much has been dequeued) using SDL_GetQueuedAudioSize().
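
For example, assuming the whole sound was queued in one go with SDL_QueueAudio (the names here are just for illustration):

```c
#include <SDL.h>

/* dev must be a queue-mode device (opened with a NULL callback) and the whole
 * sound queued up front with SDL_QueueAudio(dev, buffer, total_bytes). */
static Uint32 bytes_dequeued(SDL_AudioDeviceID dev, Uint32 total_bytes)
{
    Uint32 still_queued = SDL_GetQueuedAudioSize(dev);
    return total_bytes - still_queued;   /* rough playback position in bytes */
}
```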

I see. Since I’m using SDL_Mixer to play multiple effects simultaneously, do you know if I could somehow apply this function to a Mix_Chunk? Alternatively, would it be feasible to, within my audio object, load the same sound file with both Mixer and stock SDL, using the former for playback and the latter for checking queued audio? It doesn’t sound like it would work, and even if it did, it wouldn’t be a particularly elegant solution, especially when playing multiple sounds, given stock SDL’s lack of advanced native mixing functions.

That being said, my understanding of SDL isn’t particularly thorough.

Thanks

I’m afraid I’ve never used SDL_Mixer; I suppose you would need to discover the device ID that it is using under the hood (if that is even how it works). I’ve mixed sounds using stock SDL2 (using this cheat) but although it works surprisingly well it isn’t comparable to the capabilities of SDL_Mixer.

Hi,

No worries. SDL_Mixer definitely makes life a lot easier in terms of taking care of tedious things like audio conversion and the actual mixing of audio files.

But since it is tricky to look under the hood and modify it to get some of these fairly important functions without using hacky techniques like the ones mentioned in my original post, it kind of makes me want to build my own mixer from stock SDL. Perhaps I might even just use PortAudio to handle the audio side of the program and SDL to handle graphics and input. Again, whilst I’m sure it would be a valuable learning experience, this just seems like additional effort and bulk to add to the project.

An idea I have for implementing a mixer in stock SDL is to have a timeline of bytes, so to speak, that begins when a sound starts playing. The program would constantly check how far along the sound file is on this timeline, probably by checking how many callbacks have occurred or by using SDL_GetQueuedAudioSize. Then, if another sound is played (converted to the same format if need be), it would be mixed with the first sound starting from however many bytes have already been played on the timeline, and the result sent to the audio device. Of course, this is easier said than done, but it seems like a viable method and would give me control when building related functions.
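
Sketched out, the callback side of that idea might look something like this (very much a sketch: the Sound struct, fixed format, and lack of any locking are all simplifications of mine):

```c
#include <SDL.h>

/* Each playing sound remembers its own position on the byte timeline; the
 * callback mixes whatever is left of every active sound into the output. */
typedef struct {
    Uint8 *data;      /* already converted to the device format */
    Uint32 length;    /* total length in bytes */
    Uint32 position;  /* how far along the timeline this sound is */
    int playing;
} Sound;

#define MAX_SOUNDS 8
static Sound sounds[MAX_SOUNDS];

static void mix_callback(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;
    SDL_memset(stream, 0, (size_t)len);   /* start from silence */

    for (int i = 0; i < MAX_SOUNDS; ++i) {
        Sound *s = &sounds[i];
        if (!s->playing)
            continue;

        Uint32 remaining = s->length - s->position;
        Uint32 to_mix = (Uint32)len < remaining ? (Uint32)len : remaining;

        SDL_MixAudioFormat(stream, s->data + s->position,
                           AUDIO_S16SYS, to_mix, SDL_MIX_MAXVOLUME);

        s->position += to_mix;            /* advance this sound's timeline */
        if (s->position >= s->length)
            s->playing = 0;
    }
}
```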

Since you have some experience writing custom mixers, is there any chance you could offer advice or describe roadblocks you encountered, so I can plan accordingly?

Thanks again.

The method you described is basically what I am doing, as noted in the thread I linked to. The main difference is that I am triggering the ‘second’ (or subsequent) sound in real-time according to when the program wants it (with as little latency as possible), whereas it seems you need to mix it at a known time offset from the start of the ‘first’ sound. In some ways your requirement is easier to achieve than mine.

Yeah, thanks. I guess if I’m using callbacks to handle audio and check position, I can minimize latency by using smaller buffers so that there are more callbacks occurring in a given time frame, while still remaining within acceptable limits for smooth, stutter-free processing.
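
For instance, in a spec like the one in my earlier sketch (numbers are only illustrative):

```c
/* `samples` is the buffer size in sample frames, so it controls how often
 * the callback fires at a given frequency:
 *   4096 frames / 44100 Hz ~ 93 ms between callbacks
 *    512 frames / 44100 Hz ~ 12 ms between callbacks */
want.samples = 512;   /* smaller buffer -> more frequent callbacks -> lower latency */
```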

Since my program needs to allow the user to upload and play their own sound files, the queued-audio method would bring with it the limitation mentioned in the linked thread, where the second sound needs to be shorter than what remains of the first, and that might be too restrictive. Since I don’t know what sound files the user will be uploading or how many sounds they will be playing simultaneously, I think the sacrifice in latency is the better trade-off in my scenario.

Thanks again, always a great help!