OSS audio driver(s)

In 1.2.6, the buffer size passed to SDL_OpenAudio() controls the fragment
size.

Wouldn’t it make more sense to keep the fragment size low, to increase
responsiveness, and instead calculate the number of fragments based on the
buffer size passed to SDL_OpenAudio()?

In my own code that uses /dev/dsp, I’ve always had better results (fewer
dropouts and better response times) using a small fragment size with many
(6 to 8) fragments, though I’ve only tested said code on a few devices, such
as the SB Pro, an ES1371 card, and an SB Live.

Does anyone else have any experience in this area?

I’ve made a patch against 1.2.6’s SDL_dspaudio.c (for the “dsp” audio driver).

http://xodnizel.net/junk/SDL_dspaudio.c.diff

It works fine for me (with my ES1371 card and 1.2GHz Celeron CPU), but if
anyone else could try it and see if it works, I’d appreciate it.

On Sunday 14 December 2003 16:40, xodnizel wrote:

> In 1.2.6, the buffer size passed to SDL_OpenAudio() controls the fragment
> size.
>
> Wouldn’t it make more sense to keep the fragment size low, to increase
> responsiveness, and instead calculate the number of fragments based on the
> buffer size passed to SDL_OpenAudio()?
>
> In my own code that uses /dev/dsp, I’ve always had better results (fewer
> dropouts and better response times) using a small fragment size with
> many (6 to 8) fragments, though I’ve only tested said code on a few
> devices, such as the SB Pro, an ES1371 card, and an SB Live.
>
> Does anyone else have any experience in this area?


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

> I’ve made a patch against 1.2.6’s SDL_dspaudio.c (for the “dsp” audio
> driver).
>
> http://xodnizel.net/junk/SDL_dspaudio.c.diff

What’s the advantage of having smaller fragments that add up to the requested
buffer size? You’re still writing in blocks of the buffer size…?

The only advantage I know of is when you’re actually doing DMA, and writing
out several buffers in advance, and re-writing them as often as possible.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

I’m having a hard time thinking of how to explain my concerns… but here’s my
attempt:

It would help to reduce the size of FIFO buffers in certain applications, and
make synchronization between sound output and sound generation simpler (when
also outputting video).

In my case, my program is generating audio data every frame (1/60 second), and
it needs to be output, so I write it to a FIFO buffer internal to my
application. In the sound callback function, I read from this FIFO.

Currently, my program is blocking whenever the FIFO is full and it still needs
to write to it. With large fragment sizes, this causes jerkiness in screen
updates (some occur too close to each other, others too far apart). I could
make my FIFO larger and add some sort of “jitter” room, where it has a
preferred size and a maximum size, and use ms-accurate time reporting
functions, but this would add extra latency, which I really don’t want to have
in a game emulator.

On Sunday 14 December 2003 19:24, Sam Lantinga wrote:

>> I’ve made a patch against 1.2.6’s SDL_dspaudio.c (for the “dsp” audio
>> driver).
>>
>> http://xodnizel.net/junk/SDL_dspaudio.c.diff
>
> What’s the advantage of having smaller fragments that add up to the
> requested buffer size? You’re still writing in blocks of the buffer
> size…?
>
> The only advantage I know of is when you’re actually doing DMA, and writing
> out several buffers in advance, and re-writing them as often as possible.
>
> See ya,
> -Sam Lantinga, Software Engineer, Blizzard Entertainment



I just checked out SDL_audio.c, and it appears to assume that only
double-buffering is occurring, making my patch much less useful with regard to
the situation I described below.

Time for rethinking…

On Sunday 14 December 2003 22:34, xodnizel wrote:

> I’m having a hard time thinking of how to explain my concerns… but here’s
> my attempt:
>
> It would help to reduce the size of FIFO buffers in certain applications,
> and make synchronization between sound output and sound generation
> simpler (when also outputting video).
>
> In my case, my program is generating audio data every frame (1/60 second),
> and it needs to be output, so I write it to a FIFO buffer internal to my
> application. In the sound callback function, I read from this FIFO.
>
> Currently, my program is blocking whenever the FIFO is full and it still
> needs to write to it. With large fragment sizes, this causes jerkiness in
> screen updates (some occur too close to each other, others too far apart).
> I could make my FIFO larger and add some sort of “jitter” room, where it
> has a preferred size and a maximum size, and use ms-accurate time reporting
> functions, but this would add extra latency, which I really don’t want to
> have in a game emulator.
>
> On Sunday 14 December 2003 19:24, Sam Lantinga wrote:
>
>>> I’ve made a patch against 1.2.6’s SDL_dspaudio.c (for the “dsp” audio
>>> driver).
>>>
>>> http://xodnizel.net/junk/SDL_dspaudio.c.diff
>>
>> What’s the advantage of having smaller fragments that add up to the
>> requested buffer size? You’re still writing in blocks of the buffer
>> size…?
>>
>> The only advantage I know of is when you’re actually doing DMA, and
>> writing out several buffers in advance, and re-writing them as often as
>> possible.
>>
>> See ya,
>> -Sam Lantinga, Software Engineer, Blizzard Entertainment



