(I’m working on about three things at once here…)
Okay, X11 SDL is rock solid, and no longer uses semaphores by default.
The Win32 SDL code is on hold because I don’t know of any way to get
asynchronous notification if the event loop is running in the main thread.
The BeOS SDL code is broken until I install BeIntel.
Okay, I’ve been working on porting code that was given to me… njh…
There’s a fundamental mismatch between my idea of sound mixing and the
way that program does it.
SDL uses a periodic callback to mix a relatively small chunk of sound
(~46 ms worth), while this program queries the audio DMA position at each
frame, and mixes the next 100 ms or so.
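(Roughly, the per-frame push model looks like this; all the names below
are made up, and I’m glossing over ring-buffer wrap-around.)

    #define RATE    22050                  /* samples per second */
    #define LEAD_MS 100                    /* how far ahead of DMA to mix */

    extern int  query_dma_pos(void);       /* hypothetical: DMA read position */
    extern void mix_samples(int pos, int len);  /* hypothetical mixer */

    static int write_pos = 0;

    /* Called once per video frame: top up audio ahead of the DMA pointer */
    void update_audio(void)
    {
        int target = query_dma_pos() + (RATE * LEAD_MS) / 1000;

        if (target > write_pos) {
            mix_samples(write_pos, target - write_pos);
            write_pos = target;
        }
    }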
The problem is, this per-frame DMA approach is completely incompatible
with BeOS and Solaris.
Linux has some hacks to get at the DMA buffer, and Win32 emulates it.
Is this a very common way to write a mixer? It makes sense, but it’s
tough to implement without emulation on some platforms.
For example, on BeOS, I can keep a separate set of “DMA” buffers, made
available to the application, and each time the OS passes SDL a set of
buffers to fill, I can copy the virtual DMA buffers to them and update
the DMA read pointer. Of course, this will result in a duplicate copy
of the sound buffers.
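A rough sketch of that emulation; the callback here is an invented
stand-in, not the real BeOS API:

    #define DMA_SIZE 8192

    static unsigned char dma_buf[DMA_SIZE]; /* "virtual DMA" buffer the
                                                application mixes into */
    static int dma_read = 0;                 /* virtual DMA read pointer */

    /* Invented stand-in for the OS asking for 'len' bytes of audio */
    void os_audio_callback(unsigned char *stream, int len)
    {
        int i;

        for (i = 0; i < len; ++i) {
            stream[i] = dma_buf[dma_read];   /* the duplicate copy */
            dma_read = (dma_read + 1) % DMA_SIZE;
        }
    }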
The alternative is to write code that mixes the audio directly from the
current application state at the time the audio driver requests it,
instead of mixing the next frame or two ahead on every update.
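That’s basically the callback model SDL already uses; a minimal sketch,
assuming the current SDL audio API, with mix_from_game_state() as a
made-up application mixer:

    #include "SDL.h"

    /* Placeholder: mixes straight from whatever the app state is now */
    extern void mix_from_game_state(Uint8 *stream, int len);

    static void fill_audio(void *udata, Uint8 *stream, int len)
    {
        (void)udata;
        mix_from_game_state(stream, len);
    }

    int open_pull_audio(void)
    {
        SDL_AudioSpec spec;

        spec.freq     = 22050;
        spec.format   = AUDIO_S16;
        spec.channels = 2;
        spec.samples  = 1024;      /* ~46 ms worth at 22 kHz */
        spec.callback = fill_audio;
        spec.userdata = NULL;
        return SDL_OpenAudio(&spec, NULL);
    }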
Which is better? Thoughts anyone?