SDL_Point members do not have specified width

This forces me to have my code depend on the SDL_Point type, or perform a conversion from int16_t to int in order to create an array of SDL_Point out of my array of generic points.

Is there a reason SDL_Point is declared using a platform-variable type? Can this ever be changed?
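
For illustration, the situation being described is roughly this (Point16 is a hypothetical generic point type, not anything from SDL):

    #include <SDL.h>
    #include <stdint.h>

    /* Hypothetical generic point type used by the engine's own code. */
    typedef struct { int16_t x, y; } Point16;

    void draw(SDL_Renderer *r, const Point16 *pts, int n)
    {
        /* Tempting, but wrong wherever int is wider than 16 bits (i.e. almost
         * everywhere): Point16 and SDL_Point have different memory layouts,
         * so the array cannot simply be reinterpreted -- it must be converted. */
        /* SDL_RenderDrawPoints(r, (const SDL_Point *)pts, n); */
        (void)r; (void)pts; (void)n;
    }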

Most of the SDL code uses int. At compile time the width can be
determined though…


I'm not sure I understand the question here. SDL_Point is defined simply as a pair of ints for X and Y. This is appropriate because whatever width your platform uses for int is the most efficient for the CPU to handle, assuming your compiler is even remotely good at doing its job, anyway. :)
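
For reference, SDL2 declares it (in SDL_rect.h) as essentially:

    typedef struct SDL_Point
    {
        int x;
        int y;
    } SDL_Point;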

We tend to use specific sizes and byte orders for standardization and conservation: common file formats, network protocols, and simply to save disk space and sometimes RAM.

An on-disk format should always use a known integer size (often the
smallest appropriate) and an explicit byte order. That makes your
games portable from common x86/x86_64 processors (LE, 32 and 64 bit)
used on desktops to ARM processors (LE, usually 32 bit) on mobile
devices to Power/Cell processors (BE, 32/64 bit) used on game
consoles while economizing on space.
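
As a sketch of that idea: if the on-disk format stores points as little-endian 16-bit values (just an example layout, and with error handling omitted), SDL's RWops helpers can do the byte-order handling at load time:

    #include <SDL.h>

    /* Example on-disk layout: x and y stored as little-endian 16-bit values. */
    static void read_point_le16(SDL_RWops *rw, SDL_Point *out)
    {
        Sint16 x = (Sint16)SDL_ReadLE16(rw);   /* byte-swapped on big-endian hosts */
        Sint16 y = (Sint16)SDL_ReadLE16(rw);
        out->x = x;                            /* widened to the platform's int */
        out->y = y;
    }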

If your game is using SDL for rendering, then yeah, you should be
using SDL_Points in your code, because these are the most efficient
means of handing stuff to SDL. But if you know you only need int16_t
for disk or network (LE or BE?) then send those. Plus, I’m loading
data only at load time, and I’m drawing a lot more on the screen than
I’m sending over a network.
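
So the conversion can be a one-time pass when the data is loaded; roughly something like this (again, Point16 is a hypothetical generic point type):

    #include <SDL.h>
    #include <stdint.h>

    typedef struct { int16_t x, y; } Point16;   /* hypothetical generic point */

    /* Build the SDL_Point array once, when the data is loaded. */
    static SDL_Point *points_to_sdl(const Point16 *src, int count)
    {
        SDL_Point *dst = (SDL_Point *)SDL_malloc(sizeof(SDL_Point) * (size_t)count);
        if (!dst) {
            return NULL;
        }
        for (int i = 0; i < count; ++i) {
            dst[i].x = src[i].x;   /* implicit widening from int16_t to int */
            dst[i].y = src[i].y;
        }
        return dst;   /* hand this to SDL_RenderDrawPoints() every frame */
    }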

This does make for a little more effort if you’re porting something
to SDL that was written without it. But the resulting port will work
many more places than the original code would have, which is kind of
the point.

Joseph


Joseph Carter wrote:

If your game is using SDL for rendering, then yeah, you should be
using SDL_Points in your code

That's just it, I would like to use SDL2 as one possible renderer in my code.

You make very good points. SDL aims for maximum speed, and you just can't do that without getting machine-specific. My goal is to be as machine-agnostic as possible, so I was a bit confused when I realised I needed to pass a pointer to an array of an element of unknown size. Luckily, SDL's development model is about as open as possible, so we can discuss all these possibilities :)

I think you answered my first question, and the answer to my second question is that it could change, but that wouldn't make sense with respect to SDL's goals.

The solution is for me to do per-platform integer width detection and adapt the user's data to the correct size in order to benefit from SDL2 as a renderer. It only makes sense that my app would take the hit, since it's my app that has cross-platform support as its goal. Besides, the data transformation can be precalculated, since the platform doesn't change very often :)

sizeof(int)
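
Presumably the point here: the width in question is just the platform's int width, which can be inspected directly rather than tabulated per platform. For example:

    #include <stdio.h>
    #include <SDL.h>

    int main(void)
    {
        /* SDL_Point's members are plain int, so their width is simply the
         * platform's int width -- no per-platform table needed. */
        printf("sizeof(int) = %zu, sizeof(SDL_Point) = %zu\n",
               sizeof(int), sizeof(SDL_Point));
        return 0;
    }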


Hi,

I found this thread from 2010

http://forums.libsdl.org/viewtopic.php?t=5682

Did anything happen?
Patrick Shirkey
Boost Hardware Ltd

Nope.
Bugzilla entry? :)
Vittorio



I'm happy to revisit the patch if there is a likelihood of support. It might assist Valve with their audio latency issues too.



Patrick Shirkey
Boost Hardware Ltd


Well, FWIW, JACK is the only option if you want proper low-latency audio with the ability to route audio between applications etc., but running it together with PulseAudio is still awful, and the ALSA
wrapper adds compatibility issues on top of that. I’ve managed to get
it to work most of the time, for getting sound in browsers and
"normal" applications while still running JACK underneath, but there’s
massive latency, and some plugins and stuff just won’t work for
reasons unknown.

So, yeah, JACK support in SDL seems like a really rather nice idea. :)

As it is, I’ve given up on all that on Linux, and offer native JACK
support in my sound engine, in addition to SDL audio. I’m loading the
JACK library dynamically, so the application still loads if JACK isn’t
installed.
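
For what it's worth, a minimal sketch of that kind of optional loading (the library name and the single symbol resolved here are assumptions, and the prototype is simplified; a real engine would resolve every JACK entry point it uses):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* No hard link-time dependency on libjack: try to open it at runtime. */
        void *lib = dlopen("libjack.so.0", RTLD_NOW | RTLD_LOCAL);
        if (!lib) {
            fprintf(stderr, "JACK not available, falling back to SDL audio\n");
            return 0;                  /* the application keeps running */
        }

        /* Simplified prototype; the real one uses jack_options_t/jack_status_t. */
        typedef void *(*open_fn)(const char *name, int options, void *status, ...);
        open_fn jack_open = (open_fn)dlsym(lib, "jack_client_open");
        if (jack_open) {
            /* ... create the client, register ports, activate, etc. ... */
        }

        dlclose(lib);
        return 0;
    }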


//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’

"ALSA is used as a backend by JACK, not the other way round. "

Does that mean it’s better to write to ALSA directly, if you’re not routing
audio between applications?On Wed, Oct 2, 2013 at 9:09 AM, David Olofson wrote:


"ALSA is used as a backend by JACK, not the other way round. "

Does that mean it’s better to write to ALSA directly, if you’re not
routing
audio between applications?

It depends on what you are trying to achieve.

Using the JACK API means the developer doesn't have to "worry" about a lot of the issues associated with high-performance, professional-quality audio, as they have been dealt with in the API, and the JACK server manages the whole shebang.
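
As an illustration of what "just connect to JACK" looks like from the client side, a minimal sketch (client and port names are arbitrary, and error handling is mostly omitted):

    #include <jack/jack.h>
    #include <stdio.h>
    #include <string.h>

    static jack_port_t *out_port;

    /* JACK calls this from its realtime thread, once per period. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        float *buf = (float *)jack_port_get_buffer(out_port, nframes);
        memset(buf, 0, nframes * sizeof(float));   /* silence; a game would mix here */
        (void)arg;
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("my-game", JackNullOption, NULL);
        if (!client) {
            fprintf(stderr, "is the JACK server running?\n");
            return 1;
        }
        jack_set_process_callback(client, process, NULL);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_activate(client);   /* buffer size and sample rate come from the server */
        /* ... run the rest of the program; audio is pulled via process() ... */
        jack_client_close(client);
        return 0;
    }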

Also, if you go direct to ALSA, your app will inevitably end up being routed through PulseAudio for the vast majority of users, which adds additional latency. If the user then happens to be someone who uses JACK, your audio stream is potentially being routed through ALSA -> PA -> JACK, so you can see there are an additional two layers between the application and the audio output. Those layers add latency and complexity. While Linux audio developers are doing their best to make the entire platform as robust as possible, it is still a faster processing graph if the user goes directly through JACK when JACK is running.

Of course, there is no imperative to use JACK if you are not seeking high-performance audio and/or you are sure that your application will be able to take complete control of the ALSA layer and bypass both PA and JACK in the process, for example if you are running an embedded console-type application. However, you may find that it is a lot less hassle to use the JACK API for high-performance audio than to reinvent the wheel by going directly through ALSA. At least, that's what all the professional open source audio developers do these days. JACK is the de facto standard for open source audio and many 3D/multimedia applications too.

There is nothing to stop the gaming community adopting JACK too. In fact, there are some potential benefits from that, as there are projects underway to provide realtime 3D data (jack3d) and video frame data (jack-video) via the JACK API too. There is probably some other data that could be shared via JACK which is useful for inter-app gaming.


Patrick Shirkey
Boost Hardware Ltd


"ALSA is used as a backend by JACK, not the other way round. "

Does that mean it’s better to write to ALSA directly, if you’re not routing
audio between applications?

Theoretically, yes - provided you’re actually talking directly to ALSA.

The problem with applications using the ALSA API directly is that with
JACK and/or PulseAudio running, they either fail to work at all
(device busy), or they get an ALSA wrapper instead of the real ALSA.
That tends to result in horrible latency, crashes and other issues, so
at best, it’s usable for media players and the like that use only the
most basic API features and substantial buffering.

The ALSA API isn't meant to be implemented over sound daemons and the like, so given that everyone insists on using one sound daemon or another, I'm not sure there is any realistic solution other than avoiding direct use of ALSA in normal applications…

From a technical POV, the only solution that can actually work is to use ALSA for drivers only (never to be touched by applications), JACK (or similar) as the core realtime-capable low-level API, and higher-level APIs, daemons etc. running as JACK clients.

However, given the sorry state Linux audio is in, all you can do as an application/library developer is support every audio API you realistically can, and hope the end user can make it work one way or another. :-/
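
On the SDL side, that largely comes down to which audio backends a given build was compiled with; they can be listed like this (at the time of this thread SDL2 itself had no JACK backend, hence the patch under discussion):

    #include <SDL.h>
    #include <stdio.h>

    int main(void)
    {
        /* List the audio backends this particular SDL build can try,
         * roughly in the order SDL would try them by default. */
        int n = SDL_GetNumAudioDrivers();
        for (int i = 0; i < n; ++i) {
            printf("audio driver %d: %s\n", i, SDL_GetAudioDriver(i));
        }
        return 0;
    }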


//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’

"ALSA is used as a backend by JACK, not the other way round. "

Does that mean it’s better to write to ALSA directly, if you’re not
routing
audio between applications?

Theoretically, yes - provided you’re actually talking directly to ALSA.

The problem with applications using the ALSA API directly is that with
JACK and/or PulseAudio running, they either fail to work at all
(device busy), or they get an ALSA wrapper instead of the real ALSA.
That tends to result in horrible latency, crashes and other issues, so
at best, it’s usable for media players and the like that use only the
most basic API features and substantial buffering.

The ALSA API isn’t meant to be implemented over sound daemons an the
like, so given that everyone insists on using one sound daemon or
another, I’m not sure there is any other realistic solution than
avoiding to use ALSA directly in normal applications…

From a technical POV, the only solution that can actually work is to
use ALSA for drivers only (never to be touched by applications), JACK
(or similar) as the core realtime capable low level API, and higher
level APIs, daemons etc running as JACK clients.

However, given the sorry state Linux audio is in, all you can do as an
application/library developer, is support every audio API you
realistically can, and hope the end user can make it work one way or
another. :-/

Not sure why you consider it a sorry state. There are lots of options,
why is that a bad thing?

The lowdown is: if you want to support professional audio with low latency, use JACK; if you don't care about low latency, let PulseAudio handle the stream for you, either by supporting ALSA directly or by using the PA API or GStreamer.

Anything else is just making life difficult for yourself and your users.
But if you have a reason to use any of the other options available then
they are conveniently there for you to use.

Both PA and JACK are available on every major distribution so it’s no
stress for users to be given that choice.

If you want things to be easier than that, we would have to integrate both PA and JACK into a single server, and from the political, logistical and resource perspective that is probably never going to happen.

If we compare the state of Linux audio to open source solutions for db servers, web servers, email servers, video servers, etc., there is not much difference.


Patrick Shirkey
Boost Hardware Ltd

*sigh* Wasn't ALSA supposed to fix all of that crap when it replaced OSS? :) Nah, JACK is pretty useful. It's what EsounD/aRts should have been, and PulseAudio just looks like a multimedia sound API by committee. No wonder that ain't working out well. ;)

Plus, JACK is now used much more widely than just the Linux audio world. It gets used on the Mac for some professional audio stuff, particularly software that's available for both Windows and Mac. I've also seen it used on the iPhone, though as a client.

I don’t use it myself, but if I ran a Linux setup with a sound driver
that was not multi-open capable (again, wasn’t ALSA supposed to fix
that?), I’d definitely install JACK and make the various DEs connect
to it, either directly or via whatever server they require.

IMO, JACK is about as close to CoreAudio as you’re going to see on a
platform where everyone does their own thing.

Joseph


*sigh* Wasn't ALSA supposed to fix all of that crap when it replaced OSS? :)

Not exactly. ALSA is what OSS would have been if Sampo had not taken it
private.

Nah, JACK is pretty useful. It's what EsounD/aRts should have been, and PulseAudio just looks like a multimedia sound API by committee. No wonder that ain't working out well. ;)

PA was originally sponsored by Red Hat and is now being driven mostly by Intel. It is the default sound server for all the major distros and several mobile OSes too. Sure, there are still some bugs, but they can only be fixed if they are reported.

Plus JACK is now used much more widely than just the Linux audio
world. It gets used on the Mac for some professional audio stuff,
particularly if it’s available for Windows and Mac. I’ve also seen
it used on the iPhone, though as a client.

You are correct. JACK is supported on Linux, OS X, iOS, Windows, and FreeBSD. Unfortunately, the Android and ChromeOS teams have chosen to make it technically impossible to run JACK on a standard Android build, but they did borrow significant amounts of the JACK codebase to write their own versions of low-latency audio servers for each platform. They just left out all the good stuff like inter-app comms, MIDI, networking, and session management. We still hold out hope that one day they will change their policy on Android at least.

I don’t use it myself, but if I ran a Linux setup with a sound driver
that was not multi-open capable (again, wasn’t ALSA supposed to fix
that?), I’d definitely install JACK and make the various DEs connect
to it, either directly or via whatever server they require.

IMO, JACK is about as close to CoreAudio as you’re going to see on a
platform where everyone does their own thing.

IMO, the combination of PA + JACK provides a more powerful system than CoreAudio, and it is a cross-platform solution too.



Patrick Shirkey
Boost Hardware Ltd


Not sure why you consider it a sorry state. There are lots of options,
why is that a bad thing?

There are options, but they’re conflicting. Are you really supposed to
use different distros or manually switch audio stacks depending on
what you want to do?

In order to have (almost) everything work on the same system without
manual intervention, I’m running a stack of ALSA, JACK, PulseAudio and
an ALSA wrapper. It “sort of” works now, but there are still latency
issues with some applications, and some ALSA-only applications still
crash or don’t work at all.

Anything else is just making life difficult for yourself and your users.
But if you have a reason to use any of the other options available then
they are conveniently there for you to use.

Both PA and JACK are available on every major distribution so it’s no
stress for users to be given that choice.

Last I checked, JACK was only actually pre-installed on "studio"
distros, and when using JACK, PulseAudio wouldn’t work at all without
manual reconfiguration. Hopefully, that has changed at least…

If you want things to be easier than that we would have to integrate both
PA and JACK into a single server and from the political, logistical and
resource perspective that is probably never going to happen.

I don’t see why that should be necessary. I’m not sure what kind of
weirdness PA is doing, but I really can’t see any technical reasons
why it should have orders of magnitude more latency when running over
JACK as opposed to ALSA.

If we compare the state of Linux Audio to open source solutions for db
servers, web servers, email servers, video servers, etc… there is not
much difference.

I'm not sure that's a good thing in this context… Are we targeting gamers or sysadmins here?


//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’


There are options, but they’re conflicting. Are you really supposed to
use different distros or manually switch audio stacks depending on
what you want to do?

In order to have (almost) everything work on the same system without
manual intervention, I’m running a stack of ALSA, JACK, PulseAudio and
an ALSA wrapper. It “sort of” works now, but there are still latency
issues with some applications, and some ALSA-only applications still
crash or don’t work at all.

On my Debian system I just have PA and JACK. I have yet to encounter any
issues with audio using this combination. Although I am probably not a
good example because I already know how everything is supposed to work and
can fix things with my eyes shut if they do go wrong. Usually that only
happens when I am trying to do something that no one else has tried
before…

The one concession is that I have disabled autospawn in PA, which tends to be on by default and can get in the way, because IMO it has not yet received complete usability testing with regard to the combination of PA + JACK.


Last I checked, JACK was only actually pre-installed on "studio"
distros, and when using JACK, PulseAudio wouldn’t work at all without
manual reconfiguration. Hopefully, that has changed at least…

It depends on the distro. Ubuntu was notoriously bad at implementing PA support until a couple of years back, when they cleaned house and hired a new audio lead. Now David Henningson is very active in contributing to PA development, so that issue is fixed. There have been occasional new usability bugs introduced, but that happens with any actively developed system.


I don’t see why that should be necessary. I’m not sure what kind of
weirdness PA is doing, but I really can’t see any technical reasons
why it should have orders of magnitude more latency when running over
JACK as opposed to ALSA.

It comes down to stability issues in the PA graph, which are still being debugged/traced, and the PA Stream Buffer, which adds a minimum of 10ms latency at 64 frames/period.
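
To put that figure in perspective, a rough calculation, assuming a 48 kHz stream (the numbers are only illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* One 64-frame period at 48 kHz is ~1.3 ms, so a 10 ms minimum
         * buffer corresponds to roughly 7-8 extra periods of latency. */
        const double rate_hz = 48000.0;          /* assumed sample rate */
        const double frames_per_period = 64.0;
        double period_ms = frames_per_period / rate_hz * 1000.0;
        printf("period = %.2f ms, 10 ms buffer = %.1f periods\n",
               period_ms, 10.0 / period_ms);
        return 0;
    }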


I’m not sure that’s a good thing in this context… Are we targeting
gamers or sysadmins here?

Is there any real difference ;-P


Patrick Shirkey
Boost Hardware Ltd

[…]

It comes down to stability issues in the PA graph, which are still being debugged/traced, and the PA Stream Buffer, which adds a minimum of 10ms latency at 64 frames/period.

I see… So, it’s basically a code issue rather than a strictly
technical one. And the Stream Buffer is a part of the design that
applications can’t avoid? Makes perfect sense for most things except
musical applications and games - which, IMNSHO, should be using JACK
directly instead. :)

[…]

I’m not sure that’s a good thing in this context… Are we targeting
gamers or sysadmins here?

Is there any real difference ;-P

Not much, when it comes to typical hardcore gamers… :D

The others should probably just get a pre-configured SteamBox. ;-)


//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’

*sigh* Wasn't ALSA supposed to fix all of that crap when it replaced OSS? :)

Not exactly. ALSA is what OSS would have been if Sampo had not taken it
private.

But unlike OSS, ALSA is not implemented outside of Linux, is it? And there was mention of "Device or resource busy" using ALSA? That's supposed to be one of the things that just doesn't happen with ALSA.

I can see how, with things like PA providing emulation, those may fail to support multiple users on an emulated interface in a graceful manner. In which case, I might suggest that the emulated interface should be considered an advanced feature, to be used as a last-ditch effort to add network transparency to an application that doesn't know how to communicate with either it or JACK.

In this case, it is generally appropriate for PA and JACK to both talk to ALSA (and probably directly, since neither one appears to be trying to provide the other's functionality). The similarities appear to be quite superficial, from what I can see.

Nah, JACK is pretty useful. It’s what EsounD/aRts should
have been, and PulseAudio just looks like a multimedia sound API by committee. No wonder that ain't working out well. ;)

PA was originally sponsored by Redhat and is now being driven mostly by
Intel. It is the default sound server for all the major distros and
several mobile OS’s too. Sure there are still some bugs but they can only
be fixed if they are reported.

I guess my issue really is the multiple layers of abstraction and
indirection. To me, it’s like someone resurrecting SVGALib (now
that’s a blast from the past!) to port it to the Linux framebuffer
and then running X on top of SVGAlib! It’s madness.

Now, I quite clearly remember the GGI project being very proud of
being able to do crazy crap like that, but I also remember that it
was primarily an exercise in geek cred to have accomplished some
ridiculously harebrained setup like that just to prove that GGI
would do it.

Plus JACK is now used much more widely than just the Linux audio
world. It gets used on the Mac for some professional audio stuff,
particularly if it’s available for Windows and Mac. I’ve also seen
it used on the iPhone, though as a client.

You are correct. JACK is supported on Linux, OSX, iOS, Windows, Freebsd.
Unfortunately the Android and ChromeOS Teams have chosen to make it
technically impossible to run JACK on a standard Android build but they
did borrow significant amounts of the JACK codebase to write their own
versions of low latency audio servers for each platform. They just left
out all the good stuff like inter app comms. midi, networking, session
management. We still hold out hope that one day they will change their
policy on Android at least.

Is what they use on Android at least an API subset, the way GLES is (in theory) a subset of a corresponding OpenGL spec version? Or at least the way that used to be the case, until 3.x kind of blurred the lines on that just a little?

I don’t use it myself, but if I ran a Linux setup with a sound driver
that was not multi-open capable (again, wasn’t ALSA supposed to fix
that?), I’d definitely install JACK and make the various DEs connect
to it, either directly or via whatever server they require.

IMO, JACK is about as close to CoreAudio as you’re going to see on a
platform where everyone does their own thing.

IMO, the combination of PA + JACK provides a more powerful system than
coreaudio and it is a cross platform solution too.

The nice thing about CoreAudio on the Mac is that generally all Mac apps written to modern standards that do anything substantive with audio use CoreAudio. That's not the case with JACK on Linux.

Of course the same can be said of Direct3D on Windows, and I still prefer OpenGL as a standard, cross-platform implementation. It was admittedly kind of hard to argue in favor of OpenGL for a while there, simply because Direct3D represented the hardware and the trends in how it was being used, and OpenGL had a bunch of incompatible extensions that were comparatively pretty lame. Apparently the converse is true today. :)

Bottom line though, it seems that supporting JACK is essential on
Linux because there simply is no single “standard” for Linux that
just works. And of the various competing standards that aren’t ALSA,
JACK is probably the best of them.

That's my take, anyway. :)

Joseph


ALSA is two things in one: it is the hardware/driver layer, which is in the kernel, and it is also the ALSA-lib API. The point of JACK is to provide an API which deals with the challenging issues of realtime, low-latency audio and inter-app communication. That way the developer just has to connect to JACK to get all the professional audio features. JACK communicates via ALSA-lib, but the additional very-low-latency context switches are as close to the metal as it gets and are considered a worthy tradeoff for all the open source professional audio apps and many multimedia apps too, including Blender, Open Movie Editor, VLC, FFmpeg, MEncoder, etc…

PulseAudio is designed to deal with software mixing on the desktop in a consumer system. That means things like connecting your headphones and having the audio play through them automatically, hands-free Bluetooth support, and playing audio from Skype, Chrome and VLC at the same time. It "hijacks" direct calls to the ALSA-lib API and routes them through its own internal graph. That gives users the ability to adjust the volume and have different apps sent to different audio devices.

They serve two quite different purposes.

I had a quick look at the patch from the forum and it seems that the audio codebase has changed a bit since then. I will try to spend some time on it over the next couple of weeks to get it to compile. Otherwise, the basic functionality looks like it will work just fine. We can get some eyeballs from the JACK devs to give it the once-over before it is made official, though, to make sure the implementation is as efficient as possible.


Patrick Shirkey
Boost Hardware Ltd