WAV differences between Win ME and Win XP

Hello all,

I have a WAV file that I have been able to play successfully in Windows
ME, but it produces rather static-filled output in Windows XP. I am
using the current version of SDL_mixer, and the sound is a 16-bit,
44100 Hz sample, which is also what I have initialized the audio to be.
The sample plays fine through Winamp on both machines, but when I call
Mix_PlayChannel(-1, Sample, 0) on the XP side, the sample plays with a
bunch of static. Does anyone have an idea of what I am doing wrong?

Robert
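One common cause of static like this is a mismatch between the WAV file's actual format and the format the audio device was opened with. As a rough, self-contained way to double-check what a file really contains (this is not the poster's code; parse_wav_fmt is a made-up helper, and it assumes a canonical PCM file whose "fmt " chunk immediately follows the RIFF/WAVE header):

```c
#include <stdint.h>
#include <string.h>

/* Minimal parse of a canonical WAV header: returns 0 on success and
 * fills in the format fields, or -1 if the buffer is not what we
 * expect. Assumes the "fmt " chunk directly follows the RIFF/WAVE
 * header, which holds for most simple PCM files. */
static int parse_wav_fmt(const uint8_t *buf, size_t len,
                         uint16_t *channels, uint32_t *rate,
                         uint16_t *bits)
{
    if (len < 36 || memcmp(buf, "RIFF", 4) != 0 ||
        memcmp(buf + 8, "WAVE", 4) != 0 ||
        memcmp(buf + 12, "fmt ", 4) != 0)
        return -1;
    *channels = (uint16_t)(buf[22] | (buf[23] << 8));   /* offset 22 */
    *rate = (uint32_t)buf[24] | ((uint32_t)buf[25] << 8) |
            ((uint32_t)buf[26] << 16) | ((uint32_t)buf[27] << 24);
    *bits = (uint16_t)(buf[34] | (buf[35] << 8));       /* offset 34 */
    return 0;
}
```

If the reported rate or bit depth differs from what Mix_OpenAudio was given, the mixer has to resample or convert, which is exactly where flaky drivers tend to differ between ME and XP.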

Hello !

Is it possible to put a test.zip or .tar.gz
online with your code and your wav file ?

CU

SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl

I have absolutely no idea where I would put it for public consumption.
Is there a public FTP site of some sort that could be used? The sample
itself is just about 32k (about 16k zipped), and a sample program to
show the problem can’t be more than a couple of k. Is it possible just
to post it to the list, or is ~20k too big?
Robert

On Tue, 2003-12-09 at 05:47, Torsten Giebl wrote:

Hello !

Is it possible to put a test.zip or .tar.gz
online with your code and your wav file ?

CU


One thing I’ve noticed with a lot of my games when I went to test them
on this new [to me] Windows XP laptop I have for work is that many of them
sound very crackly.

I’m guessing it has to do with the buffer size I asked Windows to use.

Too big of a buffer, and sounds lag too long after the event.
(e.g., hit [Space] and your space ship shoots… but then a half second
later you hear a ‘Zzzap!’ noise.)

Too small a buffer, and it doesn’t stay full enough, and suddenly
your sound effects s-ZT-o-ZT-u-ZT-n-ZT-d b-ZT-a-ZT-d :^)

Is there some generic, OS-independent way to code how big the buffer
should be? Like a variable like:

MIXER_BEST_SIZE

which has different values depending on the OS and driver that SDL/SDL_Mixer
detect?

That’d be cool. ;-)

-bill!

On Tue, Dec 09, 2003 at 11:14:04AM -0600, Robert Diel wrote:

I have absolutely no idea where I would put it for public consumption.

Unfortunately, I expect it’s a function of a lot of things: hardware,
drivers, OS, and probably how much CPU other threads are taking.

When possible, I use a different approach: I open multiple hardware
streams. When a new sound starts playing, a new stream starts playing,
so there’s almost no perceptible delay. This requires a solid underlying
sound API and good drivers; DirectSound and ALSA will do it, if you have
the right hardware (an SBLive counts). The actual buffer size can be
oversized, so there’s no skipping: the startup delay and buffer
writeahead are decoupled.

I’ve been trying (passively) to think of a good way to handle
determining writeahead on systems where this doesn’t work. My system
can handle changing “buffer sizes”, as I allocate a larger buffer size
and simply change the writeahead. Some logic can be applied: if we’re
underrunning, increase the writeahead. This has a couple of problems:

  1. I don’t want any little underrun to permanently cause the writeahead
    to be boosted. Some system events (eg. AIM stealing focus) are very
    likely to cause an underrun regardless of the writeahead.

  2. Some skips may not actually be detectable as underruns. Sometimes,
    the buffer will be small enough that a bad driver can’t quite keep up,
    but high enough that it doesn’t look like an underrun to the
    application.

On Tue, Dec 09, 2003 at 12:16:29PM -0800, Bill Kendrick wrote:

Is there some generic, OS-independent way to code how big the buffer
should be? Like a variable like:

MIXER_BEST_SIZE

which has different values depending on the OS and driver that SDL/SDL_Mixer
detect?


Glenn Maynard
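Glenn's adaptive-writeahead idea, including his first concern (one stray system event should not pin latency high forever), could be sketched as grow-fast/decay-slow logic. Everything here is illustrative: wa_tuner and its constants are made-up names, not anything in SDL or Glenn's actual code.

```c
/* Hypothetical adaptive-writeahead tuner: double the writeahead on an
 * underrun, but after a long calm stretch shrink it back a little, so
 * a transient underrun (e.g. a window stealing focus) does not boost
 * the latency permanently. */
typedef struct {
    int writeahead;     /* current target, in samples */
    int min, max;       /* clamp range */
    int calm_callbacks; /* callbacks since the last underrun */
} wa_tuner;

static void wa_on_callback(wa_tuner *t, int underrun)
{
    if (underrun) {
        t->writeahead *= 2;        /* react quickly to skips */
        t->calm_callbacks = 0;
    } else if (++t->calm_callbacks >= 1000) {
        t->writeahead -= 64;       /* recover slowly when stable */
        t->calm_callbacks = 0;
    }
    if (t->writeahead > t->max) t->writeahead = t->max;
    if (t->writeahead < t->min) t->writeahead = t->min;
}
```

His second concern still applies: a skip the driver causes without reporting an underrun is invisible to this scheme.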

So the long and short of it is keep playing with the size of the buffer
until you find a happy medium. This can either be done dynamically, as
proposed by Glenn Maynard, or statically, as proposed by Bill Kendrick.
Honestly, my solution doesn’t need to rely on being best of breed for
all systems, my install base will reach 10 at best, some Win ME, some
Win XP, and a bit of 2000. So, I think I’ll explore it from the static
buffer size perspective. I’ll let y’all know if I come up with a magic
number.

Robert

Hate to reply to myself, but the magic number for my set of problems
seemed to be 2048. Works well, with this sample, on both ME and XP.

Robert

Robert Diel wrote:

Hate to reply to myself, but the magic number for my set of problems
seemed to be 2048. Works well, with this sample, on both ME and XP.

Robert

I had to use 4096 on a lower-end machine under Windows 98 to be on the
safe side. However, you can “see” the audio latency…

Stephane
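The thread's magic numbers translate directly into latency: a buffer of N samples at a given rate takes 1000 * N / rate milliseconds to drain. A one-liner makes the trade-off concrete (buffer_latency_ms is just an illustrative helper):

```c
/* Latency implied by a buffer of N samples at a given sample rate. */
static double buffer_latency_ms(int samples, int rate)
{
    return 1000.0 * samples / rate;
}
```

At 44100 Hz, Robert's 2048 is about 46 ms, Stephane's 4096 about 93 ms; on a 22050 Hz device, 4096 samples is roughly 186 ms, which is well into visibly laggy territory.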

Quoting Stephane Marchesin <stephane.marchesin at wanadoo.fr>:


I had to use 4096 on a lower-end machine under windows 98 to be on the
safe side. However, you can “see” the audio latency…

Stephane

What other issues cause “audio latency”? d2x has problems with this on (some)
windows machines.

-brad

Quoting Bradley Bell:

What other issues cause “audio latency”? d2x has problems with this on
(some) windows machines.

Heh, I didn’t finish my thought. Buffer size is already set low at 512, and 256
results in distortion.

-brad

Heh, I didn’t finish my thought. Buffer size is already set low at 512, and 256
results in distortion.

Those may be WAY too low, depending on audio device sample rate. This
appears to work well for me:

if (desired.freq <= 11025)
    desired.samples = 512;
else if (desired.freq <= 22050)
    desired.samples = 1024;
else if (desired.freq <= 44100)
    desired.samples = 2048;
else
    desired.samples = 4096; // (shrug)

Please remember that SDL might be feeding the audio device in a
different format/sample rate than you are feeding SDL, so you might want
to check for this and adjust accordingly.

–ryan.
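A property worth noticing in Ryan's table: each step doubles the buffer along with the rate, so the implied latency stays near a constant 46 ms (512/11025 ≈ 1024/22050 ≈ 2048/44100). Wrapped as a function (samples_for_rate is just a name for this sketch, not an SDL API):

```c
/* Ryan's rate-to-buffer-size table as a function; each step keeps the
 * implied latency near 46 ms, since the buffer doubles with the rate. */
static int samples_for_rate(int freq)
{
    if (freq <= 11025)
        return 512;
    else if (freq <= 22050)
        return 1024;
    else if (freq <= 44100)
        return 2048;
    else
        return 4096;
}
```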

Quoting “Ryan C. Gordon”:

Those may be WAY too low, depending on audio device sample rate.

The sample rate is 11025. 22050 is available, but I never looked into
why it didn’t sound good until yesterday. Raising the buffer to 1024
did the trick. But raising it will never decrease latency, will it?

What determines the lower limit on buffer size, anyway? Is it simply:
sample_rate / f

where f is the number of times your callback executes per second?

-brad
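On brad's lower-limit question: sample_rate / f (audio consumed between callback refills) is the right floor in the ideal case, but in practice you also need headroom for scheduling jitter, since the callback rarely runs exactly on time. A sketch of that estimate (min_buffer_samples and its jitter_ms parameter are illustrative, not an SDL API; SDL buffer sizes are conventionally rounded to a power of two afterward):

```c
/* Rough lower bound on buffer size: audio consumed per refill plus
 * slack for how late the callback thread may be scheduled. */
static int min_buffer_samples(int rate, int refills_per_sec, int jitter_ms)
{
    int base = rate / refills_per_sec;      /* samples drained per refill */
    int headroom = rate * jitter_ms / 1000; /* samples covering jitter */
    return base + headroom;
}
```

For example, at 11025 Hz with the device refilled 43 times a second and no jitter allowance, the floor is about 256 samples, which matches where brad started hearing distortion.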