SDL_Audio Improvement

Hey, I'm officially starting a thread on sdl_audio's improvement.

Something…specific in mind?

–ryan.

Well, that's why I started this thread.
I'm gonna improve it, but I want everyone's thoughts first.

On 14-Jul-2001, Ryan C. Gordon wrote:

Hey, I'm officially starting a thread on sdl_audio's improvement.

Something…specific in mind?

–ryan.

Well, that's why I started this thread.
I'm gonna improve it, but I want everyone's thoughts first.

I don’t have any problems with it.

Perhaps you should list some ideas of yours, and we can all discuss them.

Actually, I lied. I want someone to write a better sample rate converter.
But that wouldn’t require an API change.

–ryan.

Well, that's why I started this thread.
I'm gonna improve it, but I want everyone's thoughts first.
I don't have any problems with it.
Perhaps you should list some ideas of yours, and we can all discuss them.

I didn’t start the thread, but here goes:

Although I know computer graphics programming, I do not know computer
sound programming. This means if you give me two surfaces as input
and ask me to create a third surface as output with the two blended
together, I’ll know what to do. Give me two sound streams and ask me
the same thing, I won’t know what to do, and I’ll want to use someone
else’s mixing routine.

I’d prefer to have the mixing stuff done by SDL and have the
specialized loaders and players in external libraries, able to use the
internal SDL mixing routines, effectively putting half of what I
understand SDL_Mixer to do in the core of SDL.

Does that make sense?

Olivier A. Dagenais - Software Architect and Developer

Hey, that's a start.

On 14-Jul-2001, Ryan C. Gordon wrote:

Well, that's why I started this thread.
I'm gonna improve it, but I want everyone's thoughts first.

I don't have any problems with it.

Perhaps you should list some ideas of yours, and we can all discuss them.

Actually, I lied. I want someone to write a better sample rate converter.
But that wouldn't require an API change.

–ryan.

It quite makes sense. Yes, I had already planned to hack apart sdl_mixer:
put the mixer in sdl_audio and put everything else into sdl_music (a
separate library), but this has the capability of breaking older programs
that use sdl_mixer for mixing.

On 14-Jul-2001, Olivier Dagenais wrote:

I'd prefer to have the mixing stuff done by SDL and have the
specialized loaders and players in external libraries, able to use the
internal SDL mixing routines, effectively putting half of what I
understand SDL_Mixer to do in the core of SDL.

Does that make sense?

Olivier A. Dagenais - Software Architect and Developer

No problem, this is fine if it is going to be part of 1.3/2.0…
(hint, hint)

Olivier A. Dagenais - Software Architect and Developer

“Patrick McFarland” wrote in message
news:20010714233907.F253 at raptorcomp.panax.com

It quite makes sense. Yes, I had already planned to hack apart sdl_mixer:
put the mixer in sdl_audio and put everything else into sdl_music (a
separate library), but this has the capability of breaking older programs
that use sdl_mixer for mixing.

Add support for midi-out.

On Sat, Jul 14, 2001 at 08:10:33PM -0500, Patrick McFarland wrote:

Well, that's why I started this thread.
I'm gonna improve it, but I want everyone's thoughts first.

The more I know about the WIN32 API the more I dislike it. It is complex and
for the most part poorly designed, inconsistent, and poorly documented.
- David Korn

Yeah but… breaking programs is… um… "l4m3"

On 14-Jul-2001, Olivier Dagenais wrote:

No problem, this is fine if it is going to be part of 1.3/2.0…
(hint, hint)

Olivier A. Dagenais - Software Architect and Developer

Yep, I'm planning to, anyhow. With internal sequencer support, too.

On 15-Jul-2001, Nathan Hand wrote:

Add support for midi-out.

Speaking of MIDI, I just had an idea… :)

I could use a MIDI file loader + player that can drive a "plugin"
soft synth provided by the application, rather than any synth engine
built into SDL_mixer/SDL_music or whatever. Basic design: Queue or
buffer of MIDI events + timestamps and an audio buffer should be
passed to a callback function. The callback function parses the MIDI
data, fills in the audio buffer and returns. It is not specified
whether the “synth plugin” runs as a direct callback from SDL_audio,
or in a separate thread; the latter would be preferable for normal
use, as additional buffering would reduce the reliability without
impacting real time sfx latency.

Yeah, I feel like hacking a synth engine (or rather building one from
various crap I have lying around…), but I don’t have a MIDI file
parser. (Hints? Where would I find some nice and clean LGPLed code?)

Wonder why I haven’t gotten around to do this before - it’s
definitely the easiest way to get going with music + custom synth
code! Just use your favourite MIDI sequencer software hooked up to
the same synth plugin running in a MIDI-in driven real time host…

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------> http://www.linuxaudiodev.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
--------------------------------------> david at linuxdj.com -'

On Sunday 15 July 2001 06:55, Nathan Hand wrote:

Add support for midi-out.

Ahh, you mean like DirectX's little software MIDI thing?
I was thinking about that anyhow.

On 15-Jul-2001, David Olofson wrote:

Speaking of MIDI, I just had an idea… :)

I could use a MIDI file loader + player that can drive a "plugin"
soft synth provided by the application, rather than any synth engine
built into SDL_mixer/SDL_music or whatever.

Patrick McFarland wrote:

I'm gonna improve it, but I want everyone's thoughts first.

No-latency sound for Windows (and any other OSes which
might have this problem).

To put it plainly, what SDL needs is to be able to play
multiple sound effects at the same time (the number of
channels on the soundcard?) and so that they start
immediately after a "playsound" command is called.

Streaming audio is just not good enough for sound effects
in a realtime game on an OS with a long delay because of
the stream-buffer…

Just some DKR 0.02 :)

Cheers
http://www.HardcoreProcessing.com

That should be relatively easy, actually.
DirectSound naturally is low latency, and OSS and ALSA tend to be (as in, not always),
but I can only do what the OS allows me to do.

On 16-Jul-2001, Anoq of the Sun wrote:

No-latency sound for Windows (and any other OSes which
might have this problem).

Patrick McFarland wrote:

That should be relatively easy, actually.
DirectSound naturally is low latency, and OSS and ALSA tend to be (as in, not always),
but I can only do what the OS allows me to do.

Yes - but it would be nice if one only had to use the SDL APIs,
and not mess with the OS APIs :)

  • even though I already have written (but not tested)
    most of a Direct Sound audio system… I can send that
    code to anybody who is interested… (just notify me
    by private e-mail)

Also, I need the HWND of the SDL window, to focus the
sound to that window, and I can't get it unless I also
implement some other change to SDL to allow me to get
access to the HWND.
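For what it's worth, SDL 1.2 already exposes native window handles through the optional SDL_syswm.h interface, so something along these lines should work on Win32 builds (a fragment, not a complete program; error handling omitted):

```c
#include "SDL.h"
#include "SDL_syswm.h"

/* After SDL_SetVideoMode() has created the window, ask SDL for the
 * platform-specific window info; on Win32 this includes the HWND. */
HWND get_sdl_hwnd(void)
{
    SDL_SysWMinfo info;
    SDL_VERSION(&info.version);  /* tell SDL which struct version we use */
    if (SDL_GetWMInfo(&info) > 0)
        return info.window;      /* the HWND of the SDL window */
    return NULL;
}
```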

But we already discussed the design of such changes
to SDL on this list earlier :slight_smile:

Cheers–
http://www.HardcoreProcessing.com

Yeah, OS APIs are evil and wrong and immoral. It seems, tho, my project might
be dead before it gets off the ground. According to a certain SDL higher-up,
SDL 2.0 will use OpenAL…

On 16-Jul-2001, Anoq of the Sun wrote:

Yes - but it would be nice if one only had to use the SDL APIs,
and not mess with the OS APIs :)

Yeah, OS APIs are evil and wrong and immoral. It seems, tho, my project might
be dead before it gets off the ground. According to a certain SDL higher-up,
SDL 2.0 will use OpenAL…

(Sam, if you care to elaborate, it’d be appreciated…)

When you say, “will use openal”, do you mean:

  1. “Will use OpenAL as a backend for the SDL audio API, throwing away the
    other backends, since OpenAL has backends of its own”

  2. “Will use OpenAL in the same way that you can use OpenGL right now”;
    that is, support for it is integrated into SDL, but, by and large,
    it’s a separate library, and its integration is a feature in the same way
    OpenGL’s is; you can have a GL surface if you want, or just a linear
    framebuffer, otherwise.

  3. “Will use only OpenAL, and bugger all if you just need a simple, 2D
    audio stream”

?

–ryan.

Yeah, OS APIs are evil and wrong and immoral. It seems, tho, my project might
be dead before it gets off the ground. According to a certain SDL higher-up,
SDL 2.0 will use OpenAL…

(Sam, if you care to elaborate, it’d be appreciated…)

I haven’t actually said anything about this. In fact, having used both
SDL and OpenAL, I would say that OpenAL is a great API for 3D audio needs,
but SDL audio is much more lightweight.

I don’t plan to have SDL use OpenAL as an audio back end.

See ya,
-Sam Lantinga, Lead Programmer, Loki Software, Inc.

Heh, good. OpenAL == bad, evil, immoral, and wrong. Or something like that.

On 17-Jul-2001, Sam Lantinga wrote:

I haven't actually said anything about this. In fact, having used both
SDL and OpenAL, I would say that OpenAL is a great API for 3D audio needs,
but SDL audio is much more lightweight.

I don't plan to have SDL use OpenAL as an audio back end.