Realtime audio

Hello!

I’ve been reading the discussion about low-latency
audio - and I think there is a serious need for
this on platforms like Windows. Even currently available
SDL games suffer from this problem! (For instance
the games I wrote and some other SDL games I’ve tried).

I’m wondering: wouldn’t it be a good thing for SDL
to add support for low-latency audio?

At some point I will most likely come to need
this very badly for Windows :slight_smile:

I have been considering writing directly to
DirectSound and doing something like this:

*) Separate “foreground” (low-latency) sound and
“background” (less time-critical) sound.
*) The foreground sound could be implemented
on one DirectSound buffer by just
stopping whatever sound is playing and
starting a new sound.

It will only allow
one “foreground” sound at a time, but
this would also be adequate for many cases - like
the following (see the sketch after this list):

a) The audio for an MPEG video stream
b) Click-sounds in a user interface.
c) The shooting-sounds in a first person
3D shoot 'em up game
(like doom/quake/unreal/halflife etc.)
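
To make that concrete, here is roughly what I mean by the “stop and
restart” trick - only a sketch, where g_fg_buf is a hypothetical,
pre-created DirectSound secondary buffer, and error handling and format
checks are left out:

```c
/* Sketch only: a single "foreground" voice on one DirectSound secondary
 * buffer.  g_fg_buf is a hypothetical pre-created buffer, assumed large
 * enough for any single effect; error handling and format checks omitted. */
#include <windows.h>
#include <dsound.h>
#include <string.h>

static LPDIRECTSOUNDBUFFER g_fg_buf;   /* created elsewhere (hypothetical) */

void play_foreground(const void *pcm, DWORD bytes)
{
    LPVOID p1, p2;
    DWORD n1, n2;

    /* Cut off whatever is currently playing. */
    IDirectSoundBuffer_Stop(g_fg_buf);
    IDirectSoundBuffer_SetCurrentPosition(g_fg_buf, 0);

    /* Copy the new sound into the start of the buffer. */
    if (SUCCEEDED(IDirectSoundBuffer_Lock(g_fg_buf, 0, bytes,
                                          &p1, &n1, &p2, &n2, 0))) {
        memcpy(p1, pcm, n1);
        if (p2)
            memcpy((char *)p2, (const char *)pcm + n1, n2);
        IDirectSoundBuffer_Unlock(g_fg_buf, p1, n1, p2, n2);
    }

    /* Restart playback from the beginning. */
    IDirectSoundBuffer_Play(g_fg_buf, 0, 0, 0);
}
```

The idea is simply that whatever was playing is cut off the instant a new
sound is triggered, so the start latency is bounded by the card and driver
rather than by previously queued audio.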

Also, I looked at the MAIA website - and it
seems interesting - worth “admaiaring” I guess :wink:
But it also looks like a very new project -
and it does not currently seem to aim for Windows…

I would love to hear what people think about
these audio-problems - and the future of
audio-support in SDL :slight_smile:

Cheers–
http://www.HardcoreProcessing.com

Hello!

I’ve been reading the discussion about low-latency
audio - and I think there is a serious need for
this on platforms like Windows. Even currently available
SDL games suffer from this problem! (For instance
the games I wrote and some other SDL games I’ve tried).

I’m wondering: wouldn’t it be a good thing for SDL
to add support for low-latency audio?

Sure, but you have to be aware that most platforms (including Windows) can’t
deliver anything like the scheduling precision required for low-latency audio,
regardless of APIs and application hacks.

Still, supporting the shared memory APIs provided on most of the major
platforms can help the situation a great deal, at least for less critical
applications.

I have been considering writing directly to
DirectSound and doing something like this:

*) Separate “foreground” (low-latency) sound and
“background” (less time-critical) sound.

Right; that’s the way to do it.

*) The foreground sound could be implemented
on one DirectSound buffer by just
stopping whatever sound is playing and
starting a new sound.

I’m not sure this is possible to do without a card with hardware multichannel
mixing, such as the SB Live!. It’s possible in theory, but I’m not sure
it’s implemented properly for all cards in DirectSound.

It will only allow
one “foreground” sound at a time, but
this would also be adequate for many cases - like:

a) The audio for an MPEG video stream
b) Click-sounds in a user interface.
c) The shooting-sounds in a first person
3D shoot 'em up game
(like doom/quake/unreal/halflife etc.)

I think it’s way too limited to warrant the effort of implementing such a
system.

However, doing it right improves things a great deal - and will work on
OSS, OSS/Free, ALSA, DirectSound, ASIO, EASI and some other audio APIs as
well. :slight_smile:

All you do is open the device once, in direct (shared memory) mode, and then
manage the timing and mixing chunk size yourself. All you need is access to
the DMA buffer and a way to read back the current play position, to stay in
sync.

Highly buffered streams are mixed into the DMA buffer far ahead of the
current play position, while real time effects are mixed in closer to the
play position.

Sound effects with very low start latency can be achieved by starting to mix
right in front of the current play position, and then mix a large chunk right
away, effectively switching from “low latency mode” to “highly buffered mode”.
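
In code, it could look something like this - just a sketch, not tied to any
particular API: dma_buf, dma_read_play_pos(), mix_stream_into() and
mix_effect_into() are hypothetical helpers, mono 16 bit is assumed, and
buffer-wrap handling for the effect mix is left out:

```c
/* Sketch only, not tied to any particular API.  dma_buf, dma_read_play_pos(),
 * mix_stream_into() and mix_effect_into() are hypothetical; mono 16 bit is
 * assumed, and buffer-wrap handling for the effect mix is left out. */
#include <stddef.h>

#define DMA_FRAMES   8192   /* total ring buffer size, in sample frames      */
#define STREAM_LEAD  4096   /* streamed audio stays this far ahead of play   */
#define EFFECT_LEAD  64     /* effects start this close to the play position */

extern short  dma_buf[DMA_FRAMES];        /* the mapped hardware buffer      */
extern size_t dma_read_play_pos(void);    /* current play position (frames)  */
extern void   mix_stream_into(short *dst, size_t frames);
extern void   mix_effect_into(short *dst, int effect, size_t frames);

static size_t stream_write;               /* write cursor for streamed audio */

/* Called periodically: keep the "background" streams mixed far ahead of
 * the play position, so they survive scheduling jitter. */
void service_streams(void)
{
    size_t target = (dma_read_play_pos() + STREAM_LEAD) % DMA_FRAMES;

    while (stream_write != target) {
        mix_stream_into(&dma_buf[stream_write], 1);  /* frame by frame, for clarity */
        stream_write = (stream_write + 1) % DMA_FRAMES;
    }
}

/* Called when a low-latency effect is triggered: start mixing just ahead of
 * the play position, then fill a large chunk right away, so the effect
 * immediately joins the normal, highly buffered path. */
void trigger_effect(int effect)
{
    size_t pos = (dma_read_play_pos() + EFFECT_LEAD) % DMA_FRAMES;
    mix_effect_into(&dma_buf[pos], effect, STREAM_LEAD - EFFECT_LEAD);
}
```

The two lead constants are the whole trick: STREAM_LEAD buys the streams
safety against scheduling jitter, while EFFECT_LEAD sets the worst-case start
latency of a triggered effect.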

Also, I looked at the MAIA website - and it
seems interesting - worth “admaiaring” I guess :wink:
But it also looks like a very new project -

Actually a very old project, but it has been problematic figuring out how to
design something that would be useful to more than a single application, or
for very simple things only. It has to be simple, clean, flexible, powerful,
efficient and easy to use - all at the same time, or it just won’t work.

and it does not currently seem to aim for Windows…

Frankly, I don’t care much about Windows as a target for MAIA, as MAIA is
mainly intended for high end multimedia applications, with “real time” and
"reliabality" being the two major requirements that aren’t possible to
satisfy on the Windows platform.

(Right, I know Win2k kernel streams are a lot better than anything seen on
Windows before, but hey - I was just able to move out of Linux kernel
space, thanks to the lowlatency patch. I’m not going back, and certainly not
into the Win2k kernel.)

Anyway, MAIA is just an API, and doesn’t depend on Linux, x86 CPUs, POSIX or
any subsystems available for Linux only, so it should port rather easily to
just about any platform. I’ve even considered 16 bit MCUs as possible targets
- and 64 bit CPUs are rather obvious, I think. :slight_smile:

Back to Windows and low latency audio: Obviously (?), MAIA can’t break the
laws of nature, and thus cannot “fix” the Windows latency problems. It
doesn’t even address soft real time hacks such as the one described above, so
it wouldn’t be of much help. (Unless you need a huge dynamic network of
plugins, or something, of course - that’s what it’s for, basically.)

MAIA is basically just the Free/Open Source version of VST 2.0 + ReWire.

I would love to hear what people think about
these audio-problems - and the future of
audio-support in SDL :slight_smile:

Well, I can write up an SDL API proposal for multiple channels with
different/variable buffering, but I don’t think I’ll get around to
implementing it in a good while… And then, I’d probably only implement it
for OSS and/or ALSA. Anyone who knows enough about DirectSound to port it?
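
Just to give a rough idea of the kind of thing I mean - a purely
hypothetical sketch, where none of these names exist in SDL and
SDL_AudioSpec is the only real type used:

```c
/* Purely hypothetical sketch - none of these names exist in SDL;
 * SDL_AudioSpec is the only real, existing SDL type used here. */
#include "SDL_audio.h"

typedef struct SDL_AudioChannelSpec {
    SDL_AudioSpec spec;      /* format, rate and fill callback, as today      */
    int buffer_frames;       /* per-channel buffering: small for low-latency  */
                             /* "foreground" sounds, large for streams        */
    int priority;            /* which channel wins if hardware voices run out */
} SDL_AudioChannelSpec;

/* Open an extra channel on the already-open audio device, and close it. */
extern int  SDL_OpenAudioChannel(SDL_AudioChannelSpec *desired,
                                 SDL_AudioChannelSpec *obtained);
extern void SDL_CloseAudioChannel(int channel);
```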

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------> http://www.linuxaudiodev.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
--------------------------------------> david at linuxdj.com -'

On Thursday 22 March 2001 23:45, Anoq of the Sun wrote:

David Olofson wrote:

Still, supporting the shared memory APIs provided on most of the major
platforms can help the situation a great deal, at least for less critical
applications.

OK…

I’m not sure this is possible to do without a card with hardware multichannel
mixing, such as the SB Live!. It’s possible in theory, but I’m not sure
it’s implemented properly for all cards in DirectSound.

:frowning:

However, doing it right improves things a great deal - and will work on
OSS, OSS/Free, ALSA, DirectSound, ASIO, EASI and some other audio APIs as
well. :slight_smile:

All you do is open the device once, in direct (shared memory) mode, and then
manage the timing and mixing chunk size yourself. All you need is access to
the DMA buffer and a way to read back the current play position, to stay in
sync.

Highly buffered streams are mixed into the DMA buffer far ahead of the
current play position, while real time effects are mixed in closer to the
play position.

Sound effects with very low start latency can be achieved by starting to mix
right in front of the current play position, and then mix a large chunk right
away, effectively switching from “low latency mode” to “highly buffered mode”.

OK thanks…

Actually a very old project, but it has been problematic figuring out how to
design something that would be useful to more than a single application, or
for very simple things only. It has to be simple, clean, flexible, powerful,
efficient and easy to use - all at the same time, or it just won’t work.

and it does not currently seem to aim for Windows…

Frankly, I don’t care much about Windows as a target for MAIA, as MAIA is
mainly intended for high end multimedia applications, with “real time” and
"reliabality" being the two major requirements that aren’t possible to
satisfy on the Windows platform.

(Right, I know Win2k kernel streams are a lot better than anything seen on
Windows before, but hey - I was just able to move out of Linux kernel
space, thanks to the lowlatency patch. I’m not going back, and certainly not
into the Win2k kernel.)

Anyway, MAIA is just an API, and doesn’t depend on Linux, x86 CPUs, POSIX or
any subsystems available for Linux only, so it should port rather easily to
just about any platform. I’ve even considered 16 bit MCUs as possible targets
- and 64 bit CPUs are rather obvious, I think. :slight_smile:

Back to Windows and low latency audio: Obviously (?), MAIA can’t break the
laws of nature, and thus cannot “fix” the Windows latency problems. It
doesn’t even address soft real time hacks such as the one described above, so
it wouldn’t be of much help. (Unless you need a huge dynamic network of
plugins, or something, of course - that’s what it’s for, basically.)

OK - thanks for your info - I guess I shouldn’t consider MAIA for this
then :slight_smile:

Well, I can write up an SDL API proposal for multiple channels with
different/variable buffering, but I don’t think I’ll get around to
implementing it in a good while… And then, I’d probably only implement it
for OSS and/or ALSA. Anyone who knows enough about DirectSound to port it?

Well, I can say that if at some point I come to need low-latency audio
badly enough on Windows, I’m sure that also means I’ll have the
resources to implement it. And I’ve coded for DirectSound before (once
upon a time in a company where I worked). So, if it comes to that, I may
be very interested in doing the port :slight_smile:

So, right now I just need to find some company who needs to have
a complete CD-ROM production done or something :wink:

Cheers–
http://www.HardcoreProcessing.com

Back to Windows and low latency audio: Obviously (?), MAIA can’t break
the laws of nature, and thus cannot “fix” the Windows latency problems.
It doesn’t even address soft real time hacks such as the one described
above, so it wouldn’t be of much help. (Unless you need a huge dynamic
network of plugins, or something, of course - that’s what it’s for,
basically.)

OK - thanks for your info - I guess I shouldn’t consider MAIA for this
then :slight_smile:

Probably not; it’s a plugin and multimedia streaming (or “application
integration”) API, rather than a driver API or audio API layer. (That is,
it’s an alternative to DirectX plugins, VST and ReWire, rather than
DirectSound, ASIO and EASI.)

[…]

So, right now I just need to find some company who needs to have
a complete CD-ROM production done or something :wink:

Well, some people hack away just for fun. :wink:

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------> http://www.linuxaudiodev.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
--------------------------------------> david at linuxdj.com -'

On Friday 23 March 2001 12:47, Anoq of the Sun wrote: