Endianness in SDL_audio.c

I had a small problem with an audio stream in LSB while the
application ran on a SPARC (big-endian). It took some time to track
down, but I believe I have found a solution. I'm not sure whether the
original code was intended to behave this way, but this Works For Me (tm):

ft at alne:/usr/junk/ft/src/misc/SDL-1.2.8/src/audio> diff -Naur SDL_audio.c.old SDL_audio.c
--- SDL_audio.c.old	Mon Dec 13 08:54:31 2004
+++ SDL_audio.c	Thu Jan 27 13:51:51 2005
@@ -468,7 +468,8 @@
 	/* See if we need to do any conversion */
 	if ( obtained != NULL ) {
 		memcpy(obtained, &audio->spec, sizeof(audio->spec));
-	} else if ( desired->freq != audio->spec.freq ||
+	}
+	if ( desired->freq != audio->spec.freq ||
 	            desired->format != audio->spec.format ||
 	            desired->channels != audio->spec.channels ) {
 		/* Build an audio conversion block */

Could somebody who knows the audio subsystem comment on the sanity of the
fix?

Regards,
-Frode

On Thu, 27 Jan 2005 15:40:06 +0100 (CET), Frode Tennebø wrote:

I had a small problem with an audio stream in LSB while the
application ran on a SPARC (big-endian). It took some time to track
down, but I believe I have found a solution. I'm not sure whether the
original code was intended to behave this way, but this Works For Me (tm):

[patch snipped]

Could somebody who knows the audio subsystem comment on the sanity of
the fix?

You may well be right. I think there is also a problem: I build SDL
for Atari (m68k, big-endian), and the loopwave SDL demo program was
not playing sample.wav correctly. I was wondering where the bug was.
More investigation needed.

--
Patrice Mandin
WWW: http://membres.lycos.fr/pmandin/
Linux and Atari programmer
Speciality: development, games

On Thu, 27 Jan 2005 15:40:06 +0100 (CET), Frode Tennebø wrote:

I had a small problem with an audio stream in LSB while the
application ran on a SPARC (big-endian). It took some time to track
down, but I believe I have found a solution. I'm not sure whether the
original code was intended to behave this way, but this Works For Me (tm):

You’re right somewhere. I think there is also a problem. I build SDL for
Atari (m68k, big-endian), and the loopwave SDL demo program was not
playing sample.wav correctly. And I was wondering where the bug was. More
investigation needed.

I have a Mac running Linux, and have hit this problem
in many programs.

The root of the problem is that developers use AUDIO_S16
when they should be using AUDIO_S16SYS. I recently fixed
Tux Paint, then patched up a wiki somewhere to warn about
the issue. App developers are pretty much led into using
AUDIO_S16 instead of the correct AUDIO_S16SYS.

As far as I can tell, one should never use AUDIO_S16 in
app code. It should be removed from the headers I think.
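For illustration, the pattern apps should be following looks something
like this (a sketch only; the callback body, the function name, and
the numbers are made up, not taken from any particular program):

#include <string.h>
#include "SDL.h"

/* Dummy callback: a real application writes its samples into 'stream',
   always in the 'desired' format when 'obtained' is NULL (see below). */
static void fill_audio(void *userdata, Uint8 *stream, int len)
{
    memset(stream, 0, len);            /* silence, for the example */
}

int open_native_audio(void)
{
    SDL_AudioSpec desired;

    desired.freq = 44100;
    desired.format = AUDIO_S16SYS;     /* native byte order, never AUDIO_S16 */
    desired.channels = 2;
    desired.samples = 2048;
    desired.callback = fill_audio;
    desired.userdata = NULL;

    /* Passing NULL for 'obtained' tells SDL to convert internally to
       whatever format the hardware actually supports. */
    return SDL_OpenAudio(&desired, NULL);
}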

SDL_mixer.h has a MIX_DEFAULT_FORMAT that really should be
defined as AUDIO_S16SYS.

On Thu, 2005-01-27 at 16:37 +0100, Patrice Mandin wrote:

On Thu, 27 Jan 2005 10:49:25 -0500, Albert Cahalan wrote:

The root of the problem is that developers use AUDIO_S16
when they should be using AUDIO_S16SYS. I recently fixed
Tux Paint, then patched up a wiki somewhere to warn about
the issue. App developers are pretty much led into using
AUDIO_S16 instead of the correct AUDIO_S16SYS.

As far as I can tell, one should never use AUDIO_S16 in
app code. It should be removed from the headers I think.

SDL_mixer.h has a MIX_DEFAULT_FORMAT that really should be
defined as AUDIO_S16SYS.

I agree. Seeing something like this in SDL_audio.h:

/* Audio format flags (defaults to LSB byte order) */
[snip]
#define AUDIO_U16 AUDIO_U16LSB
#define AUDIO_S16 AUDIO_S16LSB

is not OK when you have AUDIO_[U|S]16SYS correctly defined for the
host's endianness. And the comment about the default byte order should
be removed. Why default to LSB byte order?
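For reference, the SYS aliases are derived from the host byte order
(paraphrased from SDL_audio.h; see the header for the exact text):

#if SDL_BYTEORDER == SDL_LIL_ENDIAN
#define AUDIO_U16SYS AUDIO_U16LSB
#define AUDIO_S16SYS AUDIO_S16LSB
#else
#define AUDIO_U16SYS AUDIO_U16MSB
#define AUDIO_S16SYS AUDIO_S16MSB
#endif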

SDL_wave.c should also be patched to change AUDIO_S16 to AUDIO_S16LSB
for spec->format. Better: check that the MS_ADPCM_decode() and
IMA_ADPCM_decode() functions really decode to AUDIO_S16LSB, not to the
native format. I wonder if they decode (wrongly!) to AUDIO_S16MSB on a
big-endian machine, while spec->format is set to AUDIO_S16[LSB].

--
Patrice Mandin
WWW: http://membres.lycos.fr/pmandin/
Linux and Atari programmer
Speciality: development, games

That’s for *.wav file decoding, right?

If you flipped that, I think you’d undo the proposed fix
and break Tux Paint in CVS. I also think that a *.wav
file should end up in native byte order so that it can
be operated on in various ways.

Tux Paint loads *.wav files via Mix_LoadWAV, then plays
them via Mix_PlayChannel. AUDIO_S16SYS is used and works.
Previously, AUDIO_S16 was used and did not work on ppc Linux.

On Thu, 2005-01-27 at 19:44 +0100, Patrice Mandin wrote:

[quoted text snipped]

On Thu, 27 Jan 2005 15:40:06 +0100 (CET), Frode Tennebø wrote:

I had a small problem with an audio stream in LSB while the
application ran on a SPARC (big-endian). It took some time to track
down, but I believe I have found a solution. I'm not sure whether the
original code was intended to behave this way, but this Works For Me (tm):

[patch snipped]

You may well be right. I think there is also a problem: I build SDL
for Atari (m68k, big-endian), and the loopwave SDL demo program was
not playing sample.wav correctly. I was wondering where the bug was.
More investigation needed.

Are you saying that my patch did not work correctly with the SDL demo?

-Frode
--
^ Frode Tennebø | email: frode at tennebo.com | Frode at IRC ^
| with Standard.Disclaimer; use Standard.Disclaimer; |

The root of the problem is that developers use AUDIO_S16
when they should be using AUDIO_S16SYS. I recently fixed
Tux Paint, then patched up a wiki somewhere to warn about
the issue. App developers are pretty much led into using
AUDIO_S16 instead of the correct AUDIO_S16SYS.

I am aware of the problem with AUDIO_S16 vs. AUDIO_S16SYS. In my case
I have sound data produced by a separate library, which delivers it in
LSB. SPARC expects MSB, and as I read the SDL code, it has the ability
to convert between the two representations.
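That conversion can also be driven explicitly with SDL_BuildAudioCVT()
and SDL_ConvertAudio(). A sketch of what I mean (the source rate and
channel count here are placeholders, and the function name is mine):

#include <stdlib.h>
#include <string.h>
#include "SDL.h"

/* Convert 16-bit LSB stereo data from an external library into the
   format the opened audio device reported. */
Uint8 *convert_lsb_to_device(const Uint8 *lsb_data, int len,
                             const SDL_AudioSpec *dev)
{
    SDL_AudioCVT cvt;

    if ( SDL_BuildAudioCVT(&cvt, AUDIO_S16LSB, 2, 44100,
                           dev->format, dev->channels, dev->freq) < 0 ) {
        return NULL;                       /* conversion not possible */
    }
    cvt.len = len;
    cvt.buf = malloc(len * cvt.len_mult);  /* worst-case output size */
    if ( cvt.buf == NULL ) {
        return NULL;
    }
    memcpy(cvt.buf, lsb_data, len);
    SDL_ConvertAudio(&cvt);                /* in place; no-op if not needed */
    return cvt.buf;                        /* cvt.len_cvt bytes are valid */
}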

As far as I can tell, one should never use AUDIO_S16 in
app code. It should be removed from the headers I think.

I'm pretty sure (I'll check again tomorrow) that I tried both before
and after applying my fix, and the result was the same:
audio->spec.format = AUDIO_S16MSB on SPARC.

I'm a novice with SDL audio, so it could be that I'm missing
something, though. If so, what?

-Frode

On Thu, 2005-01-27 at 16:37 +0100, Patrice Mandin wrote:


^ Frode Tennebø | email: frode at tennebo.com | Frode at IRC ^
| with Standard.Disclaimer; use Standard.Disclaimer; |

I found the cause of my problems with loopwave.c: the SDL_MixAudio()
function in SDL_mixer.c. If I replace the calls to SDL_MixAudio() with
a simple memcpy() in fillerup() in loopwave.c, playback is correct.

I disabled the asm m68k mixer code I wrote for SDL_mixer, thinking it
was causing the problem, but it was not. SDL_MixAudio() takes the
source format into account, but not the destination format. Maybe this
is what is causing problems for me.

--
Patrice Mandin
WWW: http://membres.lycos.fr/pmandin/
Linux and Atari programmer
Speciality: development, games

On Fri, 28 Jan 2005 14:04:18 +0100, Patrice Mandin
<mandin.patrice at wanadoo.fr> wrote:

I found the cause of my problems with loopwave.c: the SDL_MixAudio()
function in SDL_mixer.c. If I replace the calls to SDL_MixAudio() with
a simple memcpy() in fillerup() in loopwave.c, playback is correct.

Well, in fact I was wrong. It was simply the buffer used for
conversion (audio->convert.buf) that was not cleared between calls, so
it was filling up with more and more junk. Apologies.

Maybe we could use SDL_SwapBE16() and SDL_SwapLE16() in SDL_MixAudio()
to mix AUDIO_S16MSB/AUDIO_S16LSB. It would take fewer operations on
platforms where these functions are assembly-optimized.
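Something along these lines (a sketch of the idea only: volume
handling is left out, and the function is mine, not SDL's):

#include "SDL_endian.h"

/* Mix two AUDIO_S16LSB buffers on any host: swap each sample to native
   order, add with clipping, and swap back. On a little-endian CPU the
   swaps compile away. */
static void mix_s16lsb(Uint8 *dst, const Uint8 *src, Uint32 len)
{
    Uint16 *d = (Uint16 *) dst;
    const Uint16 *s = (const Uint16 *) src;
    Uint32 n = len / 2;                    /* number of 16-bit samples */

    while ( n-- ) {
        Sint32 mixed = (Sint16) SDL_SwapLE16(*d) + (Sint16) SDL_SwapLE16(*s);
        if ( mixed > 32767 )  mixed = 32767;     /* clip to Sint16 range */
        if ( mixed < -32768 ) mixed = -32768;
        *d++ = SDL_SwapLE16((Uint16) mixed);
        ++s;
    }
}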
--
Patrice Mandin
WWW: http://membres.lycos.fr/pmandin/
Linux and Atari programmer
Speciality: development, games

I am aware of the problem with AUDIO_S16 vs. AUDIO_S16SYS. In my case
I have sound data produced by a separate library, which delivers it in
LSB. SPARC expects MSB, and as I read the SDL code, it has the ability
to convert between the two representations.

As far as I can tell, one should never use AUDIO_S16 in
app code. It should be removed from the headers I think.

I'm pretty sure (I'll check again tomorrow) that I tried both before
and after applying my fix, and the result was the same:
audio->spec.format = AUDIO_S16MSB on SPARC.

It's like this: if I specify AUDIO_S16 or AUDIO_S16LSB, I get
AUDIO_S16MSB. Naturally, if I specify AUDIO_S16SYS, I get
AUDIO_S16SYS. Both give the same distorted sound, since the sound data
themselves are 16-bit LSB. In the first case I could possibly detect
that desired != obtained and do some byte swapping myself, but isn't
that what the audio conversion block feature is for? Why should I
implement something generic which is already there? And what is the
audio conversion block feature for, if not for the above (which I
think my simple patch fixes)? Please enlighten me.

The setting is still the same: LSB sound data on SPARC.

Regards,
-Frode

On Friday 28 January 2005 00:29, Frode Tennebø wrote:


^ Frode Tennebø | email: frode at tennebo.com | Frode at IRC ^
| with Standard.Disclaimer; use Standard.Disclaimer; |

I’m confused. I double checked SDL_wave.c and SDL_mixer.c, and it’s correctly
decoding to the audio output endianness. It does assume that the data going in
is the same endianness though. Maybe that’s the problem?

In the case of loopwave, it should load the WAV file in LSB format, and then
open the audio device in LSB format, which should cause the SDL audio driver
to perform endian conversion on the stream if the native audio device doesn’t
support little endian audio output. Then, the SDL mixing functions should be
mixing little-endian to little-endian.
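In code, that flow is roughly the following (a sketch of the pattern,
not loopwave's exact source; error handling elided):

SDL_AudioSpec wave_spec;
Uint8 *wave_buf;
Uint32 wave_len;

/* After loading, wave_spec describes the data as stored in the file,
   e.g. AUDIO_S16LSB for a 16-bit little-endian sample.wav. */
if ( SDL_LoadWAV("sample.wav", &wave_spec, &wave_buf, &wave_len) == NULL ) {
    /* handle load error */
}
wave_spec.callback = fillerup;       /* loopwave's callback */

/* NULL for 'obtained': SDL inserts the endian conversion itself if the
   hardware can't do little-endian output. */
if ( SDL_OpenAudio(&wave_spec, NULL) < 0 ) {
    /* handle open error */
}
SDL_PauseAudio(0);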

What’s actually happening?

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

On Sat, 12 Feb 2005 10:54:24 -0800, Sam Lantinga wrote:

I’m confused. I double checked SDL_wave.c and SDL_mixer.c, and it’s
correctly decoding to the audio output endianness. It does assume
that the data going in is the same endianness though. Maybe that’s
the problem?

In the case of loopwave, it should load the WAV file in LSB format,
and then open the audio device in LSB format, which should cause the
SDL audio driver to perform endian conversion on the stream if the
native audio device doesn’t support little endian audio output. Then,
the SDL mixing functions should be mixing little-endian to
little-endian.

What’s actually happening?

This is fixed. It was a problem in my audio driver, where the memory
buffer was not cleared between calls to the application callback
function, so it was filling up with junk. loopwave now plays
sample.wav correctly on my m68k system.
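The shape of the fix, for anyone hitting the same thing (illustrative
only, not my actual driver code; the function name is made up):

#include <string.h>
#include "SDL_sysaudio.h"

/* Silence the shared buffer before every callback, so leftovers from
   the previous call can never be replayed as junk. */
static void fill_buffer(SDL_AudioDevice *audio, Uint8 *stream, int len)
{
    memset(stream, audio->spec.silence, len);   /* clear stale contents */
    (*audio->spec.callback)(audio->spec.userdata, stream, len);
}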

Also, could you have a look at this:
http://www.libsdl.org/pipermail/sdl/2005-February/067259.html
Stephane is against such a patch, but it should be faster on some
CPUs. I also noticed that some SDL audio conversion functions in
SDL_audiocvt.c could be modified the same way.

--
Patrice Mandin
WWW: http://membres.lycos.fr/pmandin/
Linux and Atari programmer
Speciality: development, games

I’m confused. I double checked SDL_wave.c and SDL_mixer.c, and it’s
correctly decoding to the audio output endianness. It does assume
that the data going in is the same endianness though. Maybe that’s
the problem?

This is indeed the problem.

What’s actually happening?

It depends on who you ask. :)

My patch (see:
http://www.libsdl.org/pipermail/sdl/2005-January/067081.html) fixes, I
believe, the case where the endianness (or frequency/number of
channels, though only endianness is tested) of the platform the code
is running on does not match that of the input.

-Frode

On Saturday 12 February 2005 19:54, Sam Lantinga wrote:


^ Frode Tennebø | email: frode at tennebo.com | Frode at IRC ^
| with Standard.Disclaimer; use Standard.Disclaimer; |

I had a small problem with an audio stream in LSB while the
application ran on a SPARC (big-endian). It took some time to track
down, but I believe I have found a solution. I'm not sure whether the
original code was intended to behave this way, but this Works For Me (tm):

[patch snipped]

Could somebody who knows the audio subsystem comment on the sanity of the
fix?

If obtained != NULL, then it’s assumed that the calling code will provide
audio conversion to the obtained format. You should not create an audio
conversion block if the requested format doesn’t match the actual one, because
then the calling code will be doing the wrong conversion.
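To spell out the two contracts (a sketch; only one of the two
SDL_OpenAudio() calls would actually be made in a real program):

SDL_AudioSpec desired, obtained;

/* (a) Pass NULL: SDL builds the conversion block, and the callback
   always writes data in the 'desired' format. */
if ( SDL_OpenAudio(&desired, NULL) < 0 ) { /* error */ }

/* (b) Pass &obtained: no conversion block is built, and the callback
   must itself write data in whatever format 'obtained' reports. */
if ( SDL_OpenAudio(&desired, &obtained) < 0 ) { /* error */ }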

Supposedly, SDL_MixAudio() gets the correct audio format and converts to
the right one. Is that what’s happening?

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

If obtained != NULL, then it’s assumed that the calling code will
provide audio conversion to the obtained format.

I was not aware of that. :-/

You should not
create an audio conversion block if the requested format doesn’t
match the actual one, because then the calling code will be doing the
wrong conversion.

The whole point is to avoid the need for both the application
(calling code) and SDL to have (basically) the same conversion code. I
don't know the rationale for the original code being the way it is,
but SDL seemed to me the logical place for all this platform-specific
conversion code. :)

Ideally, the application (and the developer?) should not know, or
care, what platform it's running on. But I see that this might break
existing code.

Supposedly, SDL_MixAudio() gets the correct audio format and converts
to the right one. Is that what’s happening?

Not sure I follow you here: if it gets the correct audio format, why
should it convert to the right one?

Currently, no conversion is done in the application in question.

-Frode

On Sunday 13 February 2005 08:17, Sam Lantinga wrote:

^ Frode Tennebø | email: frode at tennebo.com | Frode at IRC ^
| with Standard.Disclaimer; use Standard.Disclaimer; |

Hi,

when I try to play back a wav file (44.1 kHz) with
SDL_mixer 1.2.5 or SDL_mixer 1.2.6 (with SDL 1.2.7)
on a Win2K Professional machine, it doesn't behave the
same on every machine. On some Win2K machines it plays back
nicely, but on some it sounds really broken.

Under Linux I don't have this problem. Is this a known
issue?

thx + cu
rasca

--
| Triad Berlin Projektgesellschaft mbH | http://www.triad.de/ |

On 14/02/2005, Rasca wrote:

On some Win2K machines it plays back nicely, but on some it sounds
really broken.

Have you tried increasing the buffer in the Mix_OpenAudio() call? From
my experience, 2048 is the minimum.
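For example (the rate, format and channel count here are typical
values, not requirements; only the last argument, the chunk size, is
the point):

if ( Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 2048) < 0 ) {
    fprintf(stderr, "Mix_OpenAudio: %s\n", Mix_GetError());
}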


Please remove “.ARGL.invalid” from my email when replying.
Incoming HTML mails are automatically deleted.

Olivier Fabre wrote:

On 14/02/2005, Rasca, you wrote:

On some Win2K machines it plays back nicely, but on some it sounds
really broken.

Have you tried increasing the buffer in the Mix_OpenAudio() call? From
my experience, 2048 is the minimum.

Hi, thank you very much. I increased the buffer from 1024 to 2048 and
now these problems are gone. :)

cu
rasca


| Triad Berlin Projektgesellschaft mbH | http://www.triad.de/ |

The whole point is to avoid the need for both the application
(calling code) and SDL to have (basically) the same conversion code.

Right, most of the time this is the case, and the calling code can just
pass NULL for obtained because it doesn’t care what the underlying audio
driver is doing.

Supposedly, SDL_MixAudio() gets the correct audio format and converts
to the right one. Is that what’s happening?

Not sure I follow you here: if it gets the correct audio format, why
should it convert to the right one?

Sorry, it’s not doing any conversion, I meant to say “mixing to the right one”.

Currently, no conversion is done in the application in question.

Probably all that needs to happen to make it work is to pass NULL as
the 'obtained' parameter when opening the audio system.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment