SDL info update & Sound paradigms

(I’m working on about three things at once here…)

Okay, X11 SDL is rock solid, and no longer uses semaphores by default.
The Win32 SDL code is on hold because I don’t know of any way to get
asynchronous notification if the event loop is running in the main thread.
The BeOS SDL code is broken, until I install BeIntel.

Okay, I’ve been working on porting code which was given to me… njh… :slight_smile:

There’s a basic problem between my idea of sound mixing and that program.
SDL uses a periodic callback to mix a relatively small chunk of sound
(~46 ms worth), while this program queries the audio DMA position at each
frame, and mixes the next 100 ms or so.
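Roughly, the per-frame style looks like this (a sketch only; dma_read_pos() and
mix_sounds() are stand-ins for whatever the platform actually provides, e.g.
something like the OSS GETOPTR ioctl on Linux):

```c
/* Sketch of the per-frame approach: query how far the hardware has
 * played, then mix roughly 100 ms ahead of that point in the ring
 * buffer.  dma_read_pos() and mix_sounds() are hypothetical hooks.
 */
#define DMA_BYTES  65536                        /* ring buffer size              */
#define LEAD_BYTES ((100 * 22050 * 4) / 1000)   /* ~100 ms, 22 kHz 16-bit stereo */

extern unsigned char dma_buffer[DMA_BYTES];     /* mapped from the driver  */
extern unsigned int  dma_read_pos(void);        /* hardware play offset    */
extern void mix_sounds(unsigned char *dst, unsigned int len);

static unsigned int write_pos;                  /* how far we've mixed     */

void frame_update_audio(void)
{
    unsigned int target = (dma_read_pos() + LEAD_BYTES) % DMA_BYTES;

    while (write_pos != target) {
        unsigned int chunk = (target > write_pos) ? target - write_pos
                                                  : DMA_BYTES - write_pos;
        mix_sounds(dma_buffer + write_pos, chunk);
        write_pos = (write_pos + chunk) % DMA_BYTES;
    }
}
```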

The problem is, this is completely incompatible with BeOS and Solaris.
Linux has some hacks to get at the DMA buffer, and Win32 emulates it.

Is this a very common way to write a mixer? It makes sense, but is
tough to implement without emulation on some platforms.

For example, on BeOS, I can keep a separate set of “DMA” buffers, made
available to the application, and each time the OS passes SDL a set of
buffers to fill, I can copy the virtual DMA buffers to them and update
the DMA read pointer. Of course, this will result in a duplicate copy
of the sound buffers.
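In code, the emulation would be something along these lines (just a sketch;
the callback signature is generic, not the real BeOS stream hook):

```c
/* Virtual "DMA" emulation sketch: the game mixes into vdma[] as if it
 * were hardware, and the OS audio callback drains it -- at the cost of
 * the extra memcpy mentioned above.
 */
#include <string.h>

#define VDMA_BYTES 65536

static unsigned char vdma[VDMA_BYTES];   /* buffer the application sees */
static unsigned int  vdma_read;          /* emulated DMA read pointer   */

/* Called by the OS with a hardware buffer to fill. */
void stream_callback(unsigned char *stream, unsigned int len)
{
    while (len > 0) {
        unsigned int chunk = VDMA_BYTES - vdma_read;
        if (chunk > len)
            chunk = len;
        memcpy(stream, vdma + vdma_read, chunk);   /* the duplicate copy */
        vdma_read = (vdma_read + chunk) % VDMA_BYTES;
        stream += chunk;
        len    -= chunk;
    }
}

/* What the application polls instead of real hardware. */
unsigned int dma_read_pos(void)
{
    return vdma_read;
}
```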

The alternative is to write code that mixes the audio directly, based
on the state of the application, at the time the audio driver requests
it, instead of mixing the next frame or two every update.
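That version would push all the mixing into the callback itself, something
like this (the channel bookkeeping here is invented for illustration):

```c
/* On-demand mixing sketch: the callback walks whatever sounds the
 * application has playing right now and renders only what the driver
 * asked for -- nothing is mixed ahead of time.
 */
#include <string.h>

#define MAX_CHANNELS 8

struct channel {
    short *data;   /* 16-bit mono samples, NULL if the channel is idle */
    int    len;    /* samples remaining                                */
};
static struct channel channels[MAX_CHANNELS];

void mix_callback(short *stream, int samples)
{
    int i, s;

    memset(stream, 0, samples * sizeof(short));
    for (i = 0; i < MAX_CHANNELS; i++) {
        struct channel *c = &channels[i];
        int n;

        if (c->data == NULL)
            continue;
        n = (c->len < samples) ? c->len : samples;
        for (s = 0; s < n; s++)
            stream[s] += c->data[s] / MAX_CHANNELS;   /* crude mix */
        c->data += n;
        c->len  -= n;
        if (c->len == 0)
            c->data = NULL;
    }
}
```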

Which is better? Thoughts anyone?

See ya!
-Sam Lantinga (slouken at devolution.com)
--
Author of Simple DirectMedia Layer -
http://www.devolution.com/~slouken/SDL/

(I’m working on about three things at once here…)

Okay, X11 SDL is rock solid, and no longer uses semaphores by default.

Woo Hoo! Ok, where can I get it?

Okay, I’ve been working on porting code which was given to me… njh… :slight_smile:

I don’t know what you’re talking about… :expressionless: (<— that is my best poker
face!)

There’s a basic problem between my idea of sound mixing and that program.
SDL uses a periodic callback to mix a relatively small chunk of sound
(~46 ms worth), while this program queries the audio DMA position at each
frame, and mixes the next 100 ms or so.

The problem is, this is completely incompatible with BeOS and Solaris.
Linux has some hacks to get at the DMA buffer, and Win32 emulates it.

Yeah, but, like these are going to work on linuxppc :-)…

Is this a very common way to write a mixer? It makes sense, but is
tough to implement without emulation on some platforms.

Yeah, it seems that the only really good audio stuff requires practically
direct access to the hardware. Oh for a decent DSP standard…

The alternative is to write code that mixes the audio directly, based
on the state of the application, at the time the audio driver requests
it, instead of mixing the next frame or two every update.

This would be better, if you can guarantee service.

njh

On Mon, 4 May 1998, Sam Lantinga wrote:

(I’m working on about three things at once here…)
Okay, I’ve been working on porting code which was given to me… njh… :slight_smile:
Having worked with njh’s code, I can understand this statement :slight_smile:
(sorry njh! - I could normally get your MacOS code to run under Linux after
a day or two :slight_smile:

There’s a basic problem between my idea of sound mixing and that program.
SDL uses a periodic callback to mix a relatively small chunk of sound
(~46 ms worth), while this program queries the audio DMA position at each
frame, and mixes the next 100 ms or so.

The problem is, this is completely incompatible with BeOS and Solaris.
Linux has some hacks to get at the DMA buffer, and Win32 emulates it.

Is this a very common way to write a mixer? It makes sense, but is
tough to implement without emulation on some platforms.
Why not use the DMA buffer where possible, and when you can’t, have a thread
running continuously, “polling” the sound device, writing as often as it can,
and signalling the main thread when it’s finished the first of the 2 blocks.
(“Polling” is in quotes because it can normally do a blocking write to the
sound device instead of continuously asking “are you ready?”)

Also, the block size could/should probably be set on a per-compile basis,
so the program mixes until it fills the buffer.
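Something along these lines, maybe (POSIX threads and the OSS device name are
just for the sketch; whatever thread/semaphore primitives SDL ends up with
would do the same job):

```c
/* Feeder-thread sketch: block in write() on the sound device and post
 * a semaphore each time a block goes out, so the mixer knows it can
 * start refilling that block.  Device name and block size are
 * placeholders -- the block size could be a compile-time option.
 */
#include <fcntl.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define BLOCK_BYTES 4096

static unsigned char block[2][BLOCK_BYTES];   /* double buffer */
static sem_t         block_done;

static void *audio_feeder(void *arg)
{
    int fd = open("/dev/dsp", O_WRONLY);      /* OSS, as an example */
    int which = 0;

    if (fd < 0)
        return arg;
    for (;;) {
        /* The "polling": this just blocks until the driver has room. */
        if (write(fd, block[which], BLOCK_BYTES) < 0)
            break;
        sem_post(&block_done);                /* signal the mixer */
        which ^= 1;
    }
    close(fd);
    return arg;
}

int start_audio_feeder(void)
{
    pthread_t tid;

    sem_init(&block_done, 0, 0);
    return pthread_create(&tid, NULL, audio_feeder, NULL);
}
```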

For example, on BeOS, I can keep a separate set of “DMA” buffers, made
available to the application, and each time the OS passes SDL a set of
buffers to fill, I can copy the virtual DMA buffers to them and update
the DMA read pointer. Of course, this will result in a duplicate copy
of the sound buffers.

The alternative is to write code that mixes the audio directly, based
on the state of the application, at the time the audio driver requests
it, instead of mixing the next frame or two every update.

Which is better? Thoughts anyone?

See above… Writing directly to the DMA buffer will always be better when it’s
available; otherwise, we should just emulate the buffer.

cya
Josh
----
Arnold’s Laws of Documentation:
(1) If it should exist, it doesn’t.
(2) If it does exist, it’s out of date.
(3) Only documentation for useless programs transcends the
first two laws.

(I’m working on about three things at once here…)
Okay, I’ve been working on porting code which was given to me… njh… :slight_smile:
Having worked with njh’s code, I can understand this statement :slight_smile:
(sorry njh! - I could normally get your MacOS code to run under Linux after
a day or two :slight_smile:

Actually, this isn’t my code…

Sorry to deflate you there… :stuck_out_tongue:

njh

On Tue, 5 May 1998, Joshua Samuel wrote:

(I’m working on about three things at once here…)
Okay, I’ve been working on porting code which was given to me… njh… :slight_smile:
Having worked with njh’s code, I can understand this statement :slight_smile:
(sorry njh! - I could normally get your MacOS code to run under Linux after
a day or two :slight_smile:

Actually, this isn’t my code…

Sorry to deflate you there… :stuck_out_tongue:

:slight_smile:

Didn’t think you would go and write Sound code… maybe a way cool physics
model, giving each pixel on the screen its own velocity and acceleration etc :slight_smile:

cya
Josh

On Tue, 5 May 1998, Joshua Samuel wrote:

Didn’t think you would go and write Sound code… maybe a way cool physics
model, giving each pixel on the screen its own velocity and acceleration etc :slight_smile:

Actually, I’ve been writing some code to do SoundSprocket-type stuff, but
as it refuses to run atm (and I’m busy… :frowning: ) I don’t see much point in
releasing it… And my ODE tracer is waiting for me to get the latest
SDL (which I am doing right now…)…

njh

On Tue, 5 May 1998, Joshua Samuel wrote:

The BeOS SDL code is broken, until I install BeIntel.

For example, on BeOS, I can keep a separate set of “DMA” buffers, made
available to the application, and each time the OS passes SDL a set of
buffers to fill, I can copy the virtual DMA buffers to them and update
the DMA read pointer. Of course, this will result in a duplicate copy
of the sound buffers.
There was a BeDevNews article on mixing audio (not sure if it was MIDI and
AIFFs or what), as well as associated source, available on Be’s web site.

The alternative is to write code that mixes the audio directly, based
on the state of the application, at the time the audio driver requests
it, instead of mixing the next frame or two every update.

As of late I’ve been using a BSynth and BSynthMidiFile setup where it spawns
its own thread, and I let that go before the video display kicks in
(for my current game).

Best Regards,
David