Outputting text to accessibility tools

Most of this is configured by the user in their screen reader. It is
common to output either typed letters or typed words as the user
types them. That’s only going to work right if you use a native text
input (if modified), though.

Joseph
Resident blind guy. ;)

On Tue, Dec 16, 2014 at 07:40:04PM -0600, Jared Maddox wrote:

Date: Tue, 16 Dec 2014 12:10:06 -0300
From: Sik the hedgehog <sik.the.hedgehog at gmail.com>
To: SDL Development List
Subject: Re: [SDL] Outputting text to accessibility tools

Oh, and if somebody gets confused: SAPI and Speech Dispatcher talk to
the speech synthesis engines directly, not to the screen readers, so
ideally they should be seen only as back-up backends and not as the
proper solution. Having them around is a good idea anyway.

We should probably also have a hint to select the backend, much like
the one for the renderer API.
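Purely illustrative, since no such hint exists yet, but presumably it
would be used the same way as the render driver hint; the hint name
below is made up:

    /* Hypothetical hint, modeled on SDL_HINT_RENDER_DRIVER. */
    SDL_SetHint("SDL_ACCESSIBILITY_DRIVER", "screenreader");
    /* ...with values like "sapi" or "speech-dispatcher" for the
       speech synthesis fallback backends. */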

Out of curiosity, what sort of use-cases is the eventual API expected
to support? If a user’s selection has moved from one control to
another then outputting a completely new text string to completely
replace the old one seems obvious, but what would the proper behavior
be if the user is editing some text in the middle of a paragraph,
within a text editor?

  • Call out individual letters as typed?
  • Call out the resulting word as it’s typed, starting from scratch
    every time whitespace is inserted?
  • Call out all modifications when a sufficient pause occurs, and then
    start from scratch?
  • One of the above, but with an audio note about the type of change?
  • Not call out the entire window or, shudder, the entire document,
    right?

How would any of this change with a text prompt, instead of a text
editor? Any other corners that YOU can think of?



2014-12-17 3:00 GMT-03:00, T. Joseph Carter :

Most of this is configured by the user in their screen reader. It is
common to output either typed letters or typed words as the user
types them. That’s only going to work right if you use a native text
input (if modified), though.

Yeah, and if the screen reader itself is being used that won’t matter
since the screen reader will notice that the program is getting text
input (SDL explicitly has a text input mode) and speak whatever it
needs on its own. This really matters mostly when faking it (e.g. when
talking to a speech synthesis engine), and there you don’t have much
of an option.


2014-12-16 22:40 GMT-03:00, Jared Maddox :

Out of curiosity, what sort of use-cases is the eventual API expected
to support? If a user’s selection has moved from one control to
another then outputting a completely new text string to completely
replace the old one seems obvious, but what would the proper behavior
be if the user is editing some text in the middle of a paragraph,
within a text editor?

Call out individual letters as typed?

According to Orca, this. At least when I tried it and typed into the
terminal, it’d spell out every letter as I entered it.

I imagine this would only matter for backends that talk to speech
synthesizers, since if the screen reader is in use instead, the screen
reader should automatically handle this one (text is being entered
through the OS facilities, after all). Also I can’t say what happens
when using an IME to enter text, since the speech support here doesn’t
understand Japanese at all :( (it just skips over Japanese text).
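Roughly, a backend that has to fake it would echo the text input
events itself; in the sketch below only the speak() placeholder is
invented, the rest is the existing SDL2 text input API:

    SDL_StartTextInput();                  /* enter text input mode */

    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        if (e.type == SDL_TEXTINPUT) {
            /* e.text.text is the UTF-8 text just typed; speak() is a
               placeholder for whatever the output call ends up being */
            speak(e.text.text);
        }
    }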

How would any of this change with a text prompt, instead of a text
editor? Any other corners that YOU can think of?

Navigation and such, but I’m thinking that for the kind of programs
that would use this function it’s probably a no-brainer (and most of
the stuff that could matter could be handled by the program itself).
Any program that needs more detailed support will most likely be using
native UI controls in the first place.

If somebody else knows of some issue that could be potentially
troublesome, go ahead and say so.

OK, I think it’s time to start getting this arranged.

So, here’s what we have:

  • We probably need several backends (some platforms only have one
    option, but some have both separate screen reader and speech synthesis
    support, and we also need to account for the situation where nothing
    works). This also means adding a new hint to select the backend, as
    usual.

  • We don’t want screen reader output to be turned on by default
    (think of what could happen if the backend was a speech synthesis
    engine and the user didn’t need a screen reader… the program would
    speak even though it’d just get in the way).

  • Some methods are tied to UI controls, so this output needs to be
    window-specific. This can be easily faked for the other backends,
    though (by checking which window has focus).

Anyway, the first two points suggest that it’d probably be better to
implement this as its own subsystem. The third point would require it
to interact with the video subsystem, though (although I think the
game controller subsystem relies on the joystick subsystem, so this
wouldn’t be the first time something like that happens). In any case
it seems that we’ll need to deal with the subsystems at some point.
What do people think about this?

Also if this ends up getting its own subsystem we’ll need to come up
with a name for it :P
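To make the shape of it concrete, here’s a sketch in which every
accessibility name is made up; only SDL_SetHint, SDL_InitSubSystem and
SDL_GetKeyboardFocus exist today:

    /* Pick a backend, as with the renderer (hypothetical hint). */
    SDL_SetHint("SDL_ACCESSIBILITY_DRIVER", "screenreader");

    /* Hypothetical subsystem flag; off unless the app asks for it. */
    if (SDL_InitSubSystem(SDL_INIT_ACCESSIBILITY) != 0) {
        /* nothing works on this system; the app falls back to its
           own output (or stays silent) */
    }

    /* Window-specific output where the backend needs it; the others
       could fake it by checking SDL_GetKeyboardFocus(). */
    SDL_AccessOutputText(window, "New game");  /* made-up call */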

Q: support for speech separately? just taking advantage of the idea that if you’re doing accessibility support, you might as well add the ability to access the OS speech engine for non-accessibility reasons?

Joseph

Sent via mobile


2014-12-30 10:41 GMT-03:00, mr_tawan <mr_tawan at hotmail.com>:

Although I kind of agree that having a multi-platform accessibility wrapper
is a good thing, does it really fit what SDL is aimed at? I mean, is it
better to make it a separate library (and maybe have a bridge to SDL if
it’s really needed)? Also, is it really necessary to integrate the
functionality into SDL? Can we keep them separated?

Well, part of the issue came from the fact that I had absolutely no
way to implement this properly without modifying SDL (since on Windows
I need to not just intercept some window messages, but also respond to
them during the window procedure, and SDL doesn’t seem to be very
helpful about this - and who knows what other requirements there could
be on other platforms!).

I’d rather get this into SDL than go insane trying to figure out some
really ugly hack (one that may even rely on undefined behavior) that
may still not really work (and thereby get deservedly insulted by the
users to whom I promised this feature).

I’m just afraid that one day SDL might become one big monolithic platform
that handles everything even if only parts of it are really used in most
cases.

Isn’t this already the case anyway? (and if it weren’t, then why is it
split into several subsystems that can be initialized independently?)

Now seriously, the pattern as far as I know is that SDL mostly handles
talking to the operating-system-specific stuff, while the satellite
libraries take care of higher-level stuff (I think SDL_net is the only
exception; one could also say SDL_gpu, but technically that only
understands OpenGL from what I recall, so it’s nowhere near as extensive
as what SDL does). Since this is something that directly involves the
operating system APIs, it would indeed be something that belongs in SDL.

Also honestly I’m kind of tired that every time somebody requests
something the answer is “make it into a separate library” regardless
of what is being requested.

2014-12-30 12:33 GMT-03:00, mr_tawan <mr_tawan at hotmail.com>:

Well, it’s kind of counter-intuitive to have one library manage to do
everything. Actually I think ‘does it need to be included?’ and ‘can we
separate it?’ are the most important questions to ask when someone proposes a
new feature.

We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer,
right?

SDL_ttf uses its own font renderer and then outputs the result to a
surface. SDL_image handles image formats and then loads them into a
surface. SDL_mixer mostly takes care of rendering audio and then
outputting it using an SDL callback. The common thing among all of
these is that they can easily be done without ever knowing what SDL is
doing inside: they just use the SDL API the same way any program would,
without ever really having to deal with the operating system (except
maybe to allocate memory and access files, but that can be done with
the standard library).

This thing is extremely operating-system-specific, which is the polar
opposite of what the satellite libraries do, and it may even need to
mess with resources that SDL reserves for itself.


OK, I think it’s time to start getting this arranged.

So, here’s what we have:

  • We probably need several backends (some platforms only have one
    option, but some have both separate screen reader and speech synthesis
    support, and we also need to account for the situation where nothing
    works). This also means adding a new hint to select the backend, as
    usual.

The “nothing works” case may call for an app-supplied callback. Maybe
another hint, and routing via the event subsystem?
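Something like this, perhaps; the event type and handler below are
made up, only the SDL_Event machinery itself is real:

    /* Hypothetical "nothing works" routing: the text comes back to
       the app as an event and the app presents it itself. */
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        if (e.type == SDL_ACCESSOUTPUT) {              /* made up */
            const char *text = (const char *)e.user.data1;
            show_subtitle(text);   /* app-supplied fallback, made up */
        }
    }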

Anyway, the first two points suggest that it’d probably be better to
implement this as its own subsystem. The third point would require it
to interact with the video subsystem, though (although I think the
game controller subsystem relies on the joystick subsystem, so this
wouldn’t be the first time something like that happens). In any case
it seems that we’ll need to deal with the subsystems at some point.
What do people think about this?

My main concern would be its interaction with the ongoing text-input
work. If I were implementing it, I would want to keep them from involving
each other (the user should always be able to do it themselves without
interference from the library itself, in essence), though I don’t know
if that’s possible.

Also if this ends up getting its own subsystem we’ll need to come up
with a name for it :P

I’m gonna take the lazy route, and say that it should be
"Accessibility" / SDL_INIT_ACCESSIBILITY :P


2014-12-30 23:13 GMT-03:00, Jared Maddox :

It makes sense, but at the same time this is slightly more towards
SDL_mixer or SDL_sound’s territory. Exposing speech engines via paired
SDL_RWops, AND letting a satellite library use that + a synthesiser
(in case the OS stuff won’t work for some reason) to provide the
generic manifestation is probably the right way to go.

I was going to include a link to a page talking about a cheap (though
imperfect) English speech synthesis algorithm, but I can’t seem to
find it (maybe it was something about detecting syllables instead of
actual text-to-speech stuff?).

I had at first considered just using the speech synthesis engine for
my game, but the end result is that everybody I asked complained to me
because they want to be able to use their screen readers (because they
already configured them to their needs, which vary wildly among users,
especially depending on their training).

(also, nitpick: speech engines aren’t useful when you’re both deaf and
blind, in which case you need a braille display, and those don’t hook
into speech engines)

Courts in the US have occasionally (it doesn’t actually come up often,
from what I understand) ruled that a law that was intended to require
business owners to be handicapped-accessible ALSO applies to web pages
(I understand that one of the major retailers got hit with this… and
lost). It’s very easy to infer that it’s binding on software in
general, thus it’s something that should be supported.

Moving it into a separate library would add unjustified complexity for
programmers.

One thing I want to make clear before it becomes more confusing:
this legal requirement does not apply to software per se. The thing
is that many countries require that there be no discrimination against
the disabled when offering a service, and several countries have
ruled that websites count as a service (and this number keeps
increasing over time). This doesn’t mean that the browser has to be
accessible, but rather that a site should be accessible when the
browser supports it.

But yes, anything that makes it easier for developers to make their
software accessible is definitely always welcome (as it encourages
them to do it). This was another of the reasons prompting me to just
include it in SDL itself.


Date: Tue, 30 Dec 2014 13:41:25 +0000
From: “mr_tawan” <mr_tawan at hotmail.com>
To: sdl at lists.libsdl.org
Subject: Re: [SDL] Outputting text to accessibility tools

Although I kind of agree that having a multi-platform accessibility wrapper
is a good thing, does it really fit what SDL is aimed at?

SDL is aimed at being a platform-abstraction layer: it’s a DOS-weight
partial-OS specialized for multi-media applications. Some of the
people that will want to use these applications will need
screen-reader or similar support. SDL should therefore provide both the
portions of the system that SHOULD be in SDL if they are to function
correctly, AND enough to use that same support in a platform-agnostic
manner.

Thus: this is necessary.

I mean, is it better to make it a separate library (and maybe have a
bridge to SDL if it’s really needed)?

Courts in the US have occasionally (it doesn’t actually come up often,
from what I understand) ruled that a law that was intended to require
business owners to be handicapped-accessible ALSO applies to web pages
(I understand that one of the major retailers got hit with this… and
lost). It’s very easy to infer that it’s binding on software in
general, thus it’s something that should be supported.

Moving it into a separate library would add unjustified complexity for
programmers.

Also, is it really necessary to integrate the
functionality into SDL? Can we keep them separated?

Is it really needed to keep it out of SDL? No, it doesn’t have to be kept out.
Can we make them integrated? Yes, we can make them integrated.

I’m just afraid that one day SDL might become one big monolithic platform
that handles everything even if only parts of it are really used in most
cases.

This is not an appropriate cause for fear, but instead for some
self-analysis. Many people will automatically have different ideas
about what should and what should not be in SDL. I think that textured
triangles (and maybe a batching system) should be in it. This is not
because you CAN’T do without them, but instead because those two
features allow both SDL and satellite libraries to do their jobs much
better.

Just my 2 cents.

Extending SDL isn’t anathema; it simply needs to be restrained.

Adding a full GUI system? THAT would be taking things a touch too far
(we already have graphics, so the support that a GUI satellite library
needs is already fully implemented).

Date: Tue, 30 Dec 2014 11:20:43 -0300
From: Sik the hedgehog <sik.the.hedgehog at gmail.com>
To: sdl at lists.libsdl.org
Subject: Re: [SDL] Outputting text to accessibility tools

2014-12-30 10:41 GMT-03:00, mr_tawan <mr_tawan at hotmail.com>:

I’m just afraid that one day SDL might become one big monolithic platform
that handles everything even if only parts of it are really used in most
cases.

Isn’t this already the case anyway? (and if it weren’t, then why is it
split into several subsystems that can be initialized independently?)

Yeah, it is.

Date: Tue, 30 Dec 2014 15:33:09 +0000
From: “mr_tawan” <mr_tawan at hotmail.com>
To: sdl at lists.libsdl.org
Subject: Re: [SDL] Outputting text to accessibility tools

Sik wrote:

Also honestly I’m kind of tired that every time somebody requests
something the answer is “make it into a separate library” regardless
of what is being requested.

Well, it’s kind of counter-intuitive to have one library manage to do
everything.

SDL doesn’t do “everything”, and won’t with this extension either.
Now, if SDL directly integrated the satellite libraries? THAT would be
"doing everything".

What SDL is SUPPOSED to do is act as an abstraction layer, a
"quasi-OS" that provides you with a generic API for things that would
otherwise require entirely platform-specific code. This is what SDL 1
was created for, and this is what SDL 2 is designed for. This is why
SDL 2 allows you to specify your own OpenGL library, but doesn’t
actually implement one itself: that bit’s already abstract, the
problem is in the initialization.

Actually I think ‘does it need to be included?’ and ‘can we separate it?’ are
the most important questions to ask when someone proposes a new feature.

“Does it make more sense combined or separate?” is the question that
should actually be asked, because the ones you listed express the
implication that the correct answer is “Separate”, regardless of
reality.

We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer, right?

Sik has said that any extension version will be a hack, and I believe
that the mention was in the very message you were replying to. If you
want to implement a full-featured extensions API for SDL 2 then it can
be done as a library without problems, but you’ll have to actually
implement said extensions API.

Ah, but in SDL_net’s case, the OS-specific stuff is actually pretty
generic and platform-independent as-is. With just a few exceptions,
mostly for Windows, and even those in practice are mostly quick
redefinitions of constants. Mostly, anyway; the exceptions are
significant and important. But that’s all the more reason for there
to be a library to abstract that out for you if you’re not comfortable
with those differences.

And the helper libs hosted on libsdl.org kind of rank a bit higher
than the others, especially now that there’s no longer a place on the
website to help you find SDL-using projects (games, apps, and helper
libs...)

Joseph

On Tue, Dec 30, 2014 at 11:20:43AM -0300, Sik the hedgehog wrote:

Now seriously, the pattern as far as I know is that SDL mostly handles
talking to the operating-system-specific stuff, while the satellite
libraries take care of higher-level stuff (I think SDL_net is the only
exception; one could also say SDL_gpu, but technically that only
understands OpenGL from what I recall, so it’s nowhere near as extensive
as what SDL does). Since this is something that directly involves the
operating system APIs, it would indeed be something that belongs in SDL.

What I think makes sense is:

  1. Having access to send text to be spoken by a screen reader.
  2. Having access to system text-to-speech (not always the same thing).
  3. Knowing when the user wants something read (either at the
    keyboard cursor or at the mouse cursor) and perhaps a hint of what
    they want to know if we have such a thing.

Anything beyond that is well outside SDL’s scope. But that much
gives your SDL-using apps access to speech if the system has it, and
it gives them a way to write accessibility if they want to.
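In C terms, that whole surface could be as small as the sketch below;
every name in it is invented for illustration:

    /* 1. Text to the screen reader (window-specific where the
          platform cares which control is speaking). */
    int SDL_AccessOutputText(SDL_Window *window, const char *utf8);

    /* 2. System text-to-speech, where that's a separate thing. */
    int SDL_AccessSpeakText(const char *utf8);

    /* 3. "The user wants something read" arriving as an event, say
          SDL_ACCESSREADREQUEST, carrying a keyboard- or mouse-cursor
          position hint when the platform provides one. */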

Visual cues for the deaf, monaural mixing, non-traditional input
devices, colorblind-friendly settings, high contrast text (exactly
not like FROZEN BUBBLE!), whatever else... Those are things for the
application to do if they’re wanted.

Deep accessibility features don’t belong in SDL, but access to the
API might not be over the top.

Joseph

On Tue, Dec 30, 2014 at 01:41:25PM +0000, mr_tawan wrote:

Although I kind of agree that having a multi-platform accessibility wrapper is a good thing, does it really fit what SDL is aimed at? I mean, is it better to make it a separate library (and maybe have a bridge to SDL if it’s really needed)? Also, is it really necessary to integrate the functionality into SDL? Can we keep them separated?

I’m just afraid that one day SDL might become one big monolithic platform that handles everything even if only parts of it are really used in most cases.

Just my 2 cents.



Q: support for speech separately? just taking advantage of the idea that if
you’re doing accessibility support, you might as well add the ability to
access the OS speech engine for non-accessibility reasons?

It makes sense, but at the same time this is slightly more towards
SDL_mixer or SDL_sound’s territory. Exposing speech engines via paired
SDL_RWops, AND letting a satellite library use that + a synthesiser
(in case the OS stuff won’t work for some reason) to provide the
generic manifestation is probably the right way to go.

I was going to include a link to a page talking about a cheap (though
imperfect) English speech synthesis algorithm, but I can’t seem to
find it (maybe it was something about detecting syllables instead of
actual text-to-speech stuff?).

SDL_mixer doesn’t actually have access to window events, SDL
internals, or anything of the sort. Speech synthesis is only “part
of sound” on Linux; anywhere else it’s an OS call you feed a string
to. And again, since when did SDL_mixer handle your mouse?

Although I kind of agree that having a multi-platform accessibility wrapper
is a good thing, does it really fit what SDL is aimed at?

SDL is aimed at being a platform-abstraction layer: it’s a DOS-weight
partial-OS specialized for multi-media applications. Some of the
people that will want to use these applications will need
screen-reader or similar support. SDL should therefore provide both the
portions of the system that SHOULD be in SDL if they are to function
correctly, AND enough to use that same support in a platform-agnostic
manner.

Thus: this is necessary.

Okay, for just one moment I need to take off the
busy-and-disgruntled-developer-from-the-days-of-yore hat.

I remember back in the day someone took old-school Quake 1 and redid
its sound system completely to use positional audio and otherwise
did away with the sound mixer’s (many) quirks. Then they blacked out
the screen. The blind players wiped the floor with the sighted ones.
Menus were not accessible, though, because it was a research project
rather than an attempt to create an accessible FPS. And the results
of the “research” were that the game wasn’t “fair” for the sighties.
Give people back their screens and give everyone some optical camo
(Snake? Snake?! Snaaaaaaaaake!!) so that people cannot be seen any
further away than they could be heard, use that kind of positional
audio setup, and it’ll be a fair deathmatch. :)

I don’t see many people going out of their way to make games that are
accessible, but if we can help make it easier for them to do it, that
should exist somewhere. Sik says it can’t really go somewhere other
than SDL, and knowing a little about accessibility toolkits (though
admittedly not a lot), he’s right.

Some people on this list already know that I am legally blind. I
don’t actually need speech output in anything really, but I often use
it anyway to save on eyestrain when reading long posts and whatnot. In
any game where I can configure the font size, I never even worry about it.
And the fact is that there are a whole lot of games I would LOVE to
play if I could read the damned in-game text. Fallout, *craft, you
name it.

But I can’t. Because ultimately I’m legally blind. I’m never going
to be able to read small print in a game, certainly not real-time.
It’s one of the things on a list that’s growing shorter all the time.
In the past 20 years, I’ve been able to drive cars, shoot guns, and
use gadgets so visually-oriented that they don’t actually have
physical buttons. But I still can’t play Starcraft. Not because
Starcraft is something unplayable, but because the text is too small
and I can’t make it big enough to read without stopping the game
and pulling out a magnifier to slowly read my screen.

And I doubt anyone would object to adding OS-abstracted events for
“the user wants this read”, “next/previous item”, and “adjust control
up/down” (if you think those things belong in SDL_mixer, I want a
crate of whatever you’re smoking). Speech output on every platform
that isn’t Linux is completely different from PCM audio as well, so
there’s no reason why SDL’s implementation shouldn’t include the
remaining OS-abstracted calls such as maybe SDL_AccessSetAccessLabel,
SDL_AccessSpeakText, and SDL_AccessShutUp (I’m now lobbying for that
last one as the function name, even though I’m sure the idiom doesn’t
fit other languages...) These usually work without even thinking about
opening a sound device, and probably that’s true even under Linux if
it works at all. Most OSes now have the ability to speak a bit of
text without the user setting up any accessibility features, so the
SDL_AccessSpeakText function might be available by default on many
platforms. Linux ain’t one of them.
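Usage would be about this simple; these are just the names proposed
above, still entirely vaporware:

    SDL_AccessSetAccessLabel(window, "Main menu");     /* proposed */
    SDL_AccessSpeakText("New game. Option 1 of 3.");   /* proposed */
    SDL_AccessShutUp();            /* user moved on; stop talking */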

Actually, it’s not unheard of for the “spoken” text output by a
screen reader to go to a Braille terminal rather than a speech
synthesizer. Some special-purpose programs over the years output
different things on speech and Braille devices, but that’s not
something SDL could ever even hope to do in an OS-independent way.
The software that can is pretty much written for embedded devices.

I hope you’re not saying that accessibility shouldn’t be done,
because I would find that deeply offensive in 2015. Most of the
civilized world has concluded by now that the disabled should not be
excluded as a general rule. That wasn’t possible under DOS, but it
sure is possible today. If Apple can design a buttonless interface a
blind person can operate, the NFB can design a car that we can drive
blindfolded (not that drives itself mind you, but that a blind person
can drive), then Xerox can make copiers that we can use, Panasonic
can design microwaves we can operate out of the box, OSes can feature
accessibility from the installation onward, and SDL can provide a
handful of functions to shove speech out to the OS and pass the
special keyboard or gesture commands back.

If you’ve got a problem with that, I suggest you might want to
migrate to the 21st century. Because the attitude that the disabled
aren’t worth consideration or qualify as “bloat” or “cruft” is the
kind of thing that can and does result in very expensive lawsuits
with damage awards these days. And rightfully so, if filed under
those circumstances. You would not tolerate discrimination against
someone because of their skin color or sexual orientation or religion
anywhere else, so why the hell should we accept status as second
class citizens of the modern world?

Now, if the thought never crossed your mind, that’s one thing. Or if
you can’t figure out how to make something accessible, that’s also
fine. There are things I don’t know how to make accessible, and I’m
the blind dude on the mailing list. If I don’t know how to do it
even conceptually, how can anyone else be expected to have the
answer? That’s the biggest reason why I would argue that SDL’s
accessibility support would have to be thin, BTW; I can’t imagine how
else to implement it in an OS-agnostic way.

But when accessibility support becomes less about “didn’t” or
"couldn’t", and more about “won’t” or even “shouldn’t”, you better
believe I start getting surly.

I mean, is it better to make it a separate library (and maybe have a
bridge to SDL if it’s really needed)?

Courts in the US have occasionally (it doesn’t actually come up often,
from what I understand) ruled that a law that was intended to require
business owners to be handicapped-accessible ALSO applies to web pages
(I understand that one of the major retailers got hit with this… and
lost). It’s very easy to infer that it’s binding on software in
general, thus it’s something that should be supported.

Moving it into a separate library would add unjustified complexity for
programmers.

MANY websites are only “mostly” accessible. If you can basically
make it work more or less, even if it’s not easy, you don’t have a
leg to stand on to sue.

However, if you are a public commercial enterprise, and it is
actually impossible to access the checkout button of your website
without being able to see and click on it, there’s a problem. If we
then approach you with the problem and offer you the code to fix it,
and you REFUSE? Ask Target how that worked out when we sued them.
(Hint: They lost.)

Didn’t or couldn’t vs won’t or shouldn’t. Target argued that online
shopping was only for able-bodied people. The disabled could just
walk into their stores if they had a problem with the website. And
they shouldn’t have to go and make their buttons clickable just
because some disabled people couldn’t use their broken javascript.

That could have cost Target dearly, if we wanted to make it an
expensive lesson. But all we asked was that they fix it. Cost them
their web developers’ time to implement the fix and some court costs.

Likewise, we asked Apple to make the iPod accessible and they said
there weren’t enough blind people out there who listened to music for
them to worry about it. We educated them as to the depth of their
error in thinking. Again, it could’ve been a very expensive lesson
for them, but we were after a fix more than a payday. And they
implemented the fix we asked for (spoken names of songs as m4a files,
along with the menus) because it actually was an easy fix.

But they also took the lesson to heart. The next iPhone had a screen
reader that was revolutionary. Webkit went from zero to accessible
in one major OS revision. Apple improved their magnifier and created
VoiceOver on the Mac. And Apple’s accessibility push was so complete
and profound that it literally forced Microsoft to do the same to
Windows 8; it no longer costs blind people $1000 on top of the price
of a computer for the privilege of being able to use it. (I wasn’t
involved in the Apple lawsuit at all actually, but I approve of the
outcome most assuredly!)

Including a few hooks for the OS’s own accessibility features won’t
make video games accessible to anyone. But putting the ability to do
it into game developers’ hands will hopefully encourage people
to at least consider it. After all, DOS couldn’t do unicode either
and SDL now does that exceptionally well. And some day, I’ll figure
out how it works and start using it. Because it’s worth doing for
the people who need it.

Also, is it really necessary to integrate the
functionality into SDL? Can we keep them separated?

Is it really needed to keep it out of SDL? No, it doesn’t have to be kept out.
Can we make them integrated? Yes, we can make them integrated.

It’s already been discussed that SDL fundamentally needs to be
changed to make accessibility possible, library or not. But as the
support required entails translating some system events to SDL events
and providing a wrapper around a system that fundamentally gets
passed a string when appropriate, I’d say it’s just as important as
supporting unicode, and for just the same reasons. You COULD put
that in a library. You shouldn’t though.

I’m just afraid that one day SDL might become one big monolithic platform
that handles everything even if only parts of it are really used in most
cases.

This is not an appropriate cause for fear, but instead for some
self-analysis. Many people will automatically have different ideas
about what should and what should not be in SDL. I think that textured
triangles (and maybe a batching system) should be in it. This is not
because you CAN’T do without them, but instead because those two
features allow both SDL and satellite libraries to do their jobs much
better.

Actually, I have interest in being able to backend SDL into libretro
for a few things, which largely involves being able to gut large
components from SDL as it is normally compiled/installed and build a
small one-target library. Probably a static one at that.

That would seem to run cross-purposes to things like adding lots of
features like accessibility, but so much of SDL is modular and the
modules don’t often have a huge degree of interdependency. That
doesn’t mean we shouldn’t exercise some stewardship over what does
and doesn’t go into the library, but it does mean that some things
should go into the library because they belong there, even if someone
else might find them to be unnecessary at this time. The renderer
for 2D games and the GameController API are examples I’ve cited of
this recently. The one is totally irrelevant to any modern 3D title,
and the other already is a helper lib that was just stuck in the
trunk because Valve wanted it for Steam. And today nobody would
really argue that either thing didn’t belong in SDL.

Extending SDL isn’t anathema; it simply needs to be restrained.

Adding a full GUI system? THAT would be taking things a touch too far
(we already have graphics, so the support that a GUI satellite library
needs is already fully implemented).

Not only that, but GUI is such a nebulous concept that people’s needs
are going to be wildly different. The GUI I would need for a game’s
menus is going to be a lot simpler than what you might want for a
3D modeling program. It doesn’t even make sense to implement one in
terms of the other most of the time.

SDL doesn’t do “everything”, and won’t with this extension either.
Now, if SDL directly integrated the satellite libraries? THAT would be
"doing everything".

What SDL is SUPPOSED to do is act as an abstraction layer, a
"quasi-OS" that provides you with a generic API for things that would
otherwise require entirely platform-specific code. This is what SDL 1
was created for, and this is what SDL 2 is designed for. This is why
SDL 2 allows you to specify your own OpenGL library, but doesn’t
actually implement one itself: that bit’s already abstract, the
problem is in the initialization.

I don’t see it as in any way related to an OS. I see it as a way to not
care about an OS at all, FWIW. General rule in my mind is that
nothing outside of SDL should need to know about things like that.
In practice there will likely be some, but it should be limited.

“Does it make more sense combined or separate?” is the question that
should actually be asked, because the ones you listed express the
implication that the correct answer is “Separate”, regardless of
reality.

I see the following possibilities for any thing you might do:

  1. It should not be done anywhere.
  2. It should be done outside of SDL.
  3. It should be #2, but SDL needs to be enhanced so it can be.
  4. It should be part of SDL itself.

Normally, any public function in SDL makes its way into your program
via SDL.h. I can see that being otherwise for certain more internal
bits (say of the renderer) which are frozen for the current ABI and
exported so that you can extend SDL from the outside, but that aren’t
really intended for use by most programs.

I dunno if that’s a good idea, but it’s one that is rattling around
in my head.

Joseph

On Tue, Dec 30, 2014 at 08:13:47PM -0600, Jared Maddox wrote:

It is a reasonable thing to suggest the idea of an extra library, but
if it ain’t possible with SDL as it is, or at least if it ain’t
practical with SDL as it is, the question isn’t whether or not it
should be done outside of SDL, but rather if it should be done at
all. Because if it’s going to be done at all, SDL reasonably has to
be changed in some way, either to allow it to be a helper lib, or to
include the functionality directly.

And again, which it should be is not always evident from the outset.
As I said last night, the GameController API is fully an extension
library baked in to SDL proper, basically because Valve wanted it.
Turns out that it’s a very good and useful thing, if a little limited
in scope simply because it’s exactly what Valve wanted and neither
more nor less. But that’s what an ABI-breaking 2.1 is for.

Joseph

On Tue, Dec 30, 2014 at 03:33:09PM +0000, mr_tawan wrote:

Also honestly I’m kind of tired that every time somebody requests
something the answer is “make it into a separate library” regardless
of what is being requested.

Well, it’s kind of counter-intuitive to have one library manage to do everything. Actually I think ‘does it need to be included?’ and ‘can we separate it?’ are the most important questions to ask when someone proposes a new feature.

We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer, right?

Damn, this took a while to read x_x; (that I’m doing other stuff at
the same time isn’t helping matters)

2015-01-02 19:07 GMT-03:00, T. Joseph Carter :

SDL_AccessShutUp

Hah! The only problem is that the only time you’d want to explicitly
shut up the screen reader is if the text is gone in the first place,
so speaking an empty string may do the job as well. (and when it’s the
user who wants to make the screen reader shut up, that’s the screen
reader’s job, not the program’s)

I suppose that even then it still wouldn’t hurt even if it ends up as
just a wrapper function (could help with code clarity, maybe?).
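That is, at worst it’d be something like this (assuming the
empty-string trick really does interrupt speech on every backend):

    void SDL_AccessShutUp(void)
    {
        SDL_AccessSpeakText("");
    }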

These usually work without even thinking about
opening a sound device, and probably that’s true even under Linux if
it works at all. Most OSes now have the ability to speak a bit of
text without the user setting up any accessibility features, so the
SDL_AccessSpeakText function might be available by default on many
platforms. Linux ain’t one of them.

Linux has Speech Dispatcher, and it’s installed by default at least in
the case of Ubuntu (since Orca needs to make use of it), though I
gotta admit, the default speech engine leaves a lot to be desired…
but yeah it’s there. And yeah, it works without having SDL initialize
the sound (the daemon is a separate process, after all…).
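For the curious, talking to it from C is about this involved. This is
the real libspeechd API as far as I know (the header path varies by
distro), with error handling stripped down:

    #include <speech-dispatcher/libspeechd.h>

    SPDConnection *conn = spd_open("mygame", NULL, NULL, SPD_MODE_SINGLE);
    if (conn != NULL) {
        /* SPD_TEXT is the ordinary message priority */
        spd_say(conn, SPD_TEXT, "Hello from Speech Dispatcher");
        spd_close(conn);
    }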

Actually, it’s not unheard of for the “spoken” text output by a
screen reader to go to a Braille terminal rather than a speech
synthesizer.

This is the main reason why I wasn’t happy with SAPI and Speech
Dispatcher and instead wanted a way to ensure text went to screen
readers (the other issue being that they don’t follow screen reader
settings, which is guaranteed to infuriate users).

Some special-purpose programs over the years output
different things on speech and Braille devices, but that’s not
something SDL could ever even hope to do in an OS-independent way.
The software that can is pretty much written for embedded devices.

Yeah, I think the only way to tell for sure is to talk to the screen
reader directly, which isn’t feasible without using their proprietary
APIs (and not all of them provide one, either). I’d say that this is
most likely low priority for now anyway; let’s focus on the most
important aspect, i.e. outputting text in the first place.

Damn, this took a while to read x_x; (that I’m doing other stuff at
the same time isn’t helping matters)

The problem with big replies is that they tend to generate big
replies themselves.

2015-01-02 19:07 GMT-03:00, T. Joseph Carter :

SDL_AccessShutUp

Hah! The only problem is that the only time you’d want to explicitly
shut up the screen reader is if the text is gone in the first place,
so speaking an empty string may do the job as well. (and when it’s the
user who wants to make the screen reader shut up, that’s the screen
reader’s job, not the program’s)

I suppose that even then it still wouldn’t hurt even if it ends up as
just a wrapper function (could help with code clarity, maybe?).

Trust me, the user of the screen reader will want to shut it up all
the time. Probably they have a system-wide keybinding for this, but
if your stuff somehow overrides that or is self-voicing, the
function will probably be wanted.

These usually work without even thinking about
opening a sound device, and probably that’s true even under Linux if
it works at all. Most OSes now have the ability to speak a bit of
text without the user setting up any accessibility features, so the
SDL_AccessSpeakText function might be available by default on many
platforms. Linux ain’t one of them.

Linux has Speech Dispatcher, and it’s installed by default at least in
the case of Ubuntu (since Orca needs to make use of it), though I
gotta admit, the default speech engine leaves a lot to be desired…
but yeah it’s there. And yeah, it works without having SDL initialize
the sound (the daemon is a separate process, after all…).

Speech on Linux is simply godawful. If you don’t have a license for
something like a Nuance engine that sounds human, you have academic
research stuff, optimized versions of the same, and old-school
hardware speech chips if you can find one. And you can, usually the
DoubleTalk, which sounds like DoubleSh*t. Actually, going along with
the ability to shut the thing up is the ability to generate very fast
speech without swallowing syllables or even just phonemes and
morphemes. Which is a fancy way of saying that you need your speech
engine to be able to blather like an auctioneer while you still
understand what the thing is saying. That’s usually more important than
natural prosody or human characteristics like the (IMO kind of weird)
taking-a-breath sound made by Apple’s Alex synth.

The best speech I’ve ever heard out of a speech chip was actually one
local guy’s JAWS for DOS setup back in the day for the Accent synth.
That synth is based on the same physical speech chip as was used in
the original Speak-n-Spell. It frankly didn’t sound much better with
default settings. But he’d managed to build a voice profile that
sounded great to me, a guy who otherwise preferred the Keynote Gold
for precisely the reason just stated: When sped up, you hear every
phoneme and morpheme distinctly, even if the entire speech engine
sounds like a guy trying to speak very clearly while holding his
nose. Ahh, those were the daze.

Actually, it’s not unheard of for the “spoken” text output by a
screen reader to go to a Braille terminal rather than a speech
synthesizer.

This is the main reason why I wasn’t happy with SAPI and Speech
Dispatcher and instead wanted a way to ensure text went to screen
readers (the other issue being that they don’t follow screen reader
settings, which is guaranteed to infuriate users).

Also it’s quite likely that a visually impaired user will have
configured their screen reader for optimal reading performance for
their needs b u t l e a v e t h e d e f a u l t s l o w ,
a n n o y i n g s e t t i n g for the default system voice. I
haven’t done that because I tend to use the default system voice with
a “read this” key command far more often than an actual screen
reader.

Some special-purpose programs over the years output
different things on speech and Braille devices, but that’s not
something SDL could ever even hope to do in an OS-independent way.
The software that can is pretty much written for embedded devices.

Yeah, I think the only way to tell for sure is to talk to the screen
reader directly, which isn’t feasible without using their proprietary
APIs (and not all of them provide one, either). I’d say that this is
most likely low priority for now anyway; let’s focus on the most
important aspect, i.e. outputting text in the first place.

If you talk to T.V. Raman about emacspeak and start talking about
screen readers, he’ll start making fun of you. Emacspeak isn’t a
screen reader. Rather, it is speech access to the internal state of
emacs, which of course is a fully functional environment you never
need to leave anyway.

Take for example my tmux status bar, reproduced below in a squished
format:

"[0] 0:Python- 1:mutt* 2:bash Sun Jan 04 15:28 "

How does a screen reader interpret that? Usually by trying to guess.
It has to know what those things mean. Is “Sun” the word sun, or is
it intended to be an abbreviation for Sunday? A screen reader must
expend effort trying to figure that out. If this were emacspeak or a
similar embedded environment that was self-voicing, it would know
what those numbers and punctuation marks on the left mean, and that
the thing on the right was a date. It could thus read each
appropriately.

What’s my current window? “Window one.” Or more verbosely, “Window
one, currently running mutt.” The datetime would be read as “Sunday,
January fourth, fifteen twenty-eight.”, or, more tersely, “Fifteen
twenty-eight.”. Something to keep in mind for your SDL apps is that
if you’re sending stuff to the screen reader yourself, rather than
having it try to scrape what you’re sticking in the window, it
doesn’t have to say what’s actually on the screen.
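Concretely, with the hypothetical speak call from earlier in the
thread, the app gets to say the useful thing instead of the literal
thing:

    /* Displayed status bar (what sighted users see):
       "[0] 0:Python-  1:mutt*  2:bash   Sun Jan 04 15:28"        */

    /* What the app sends on request; the call is the hypothetical
       one proposed earlier in this thread. */
    SDL_AccessSpeakText("Window one, currently running mutt.");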

The thing to remember is that visual access to a screen is inherently
random-access, but speech or even Braille displays tied in to a
screen reader are inherently serial access. Hence the value of the
shut up keystroke. If you need to know it’s Sunday, the rest of the
datetime is irrelevant and you’ve got stuff to do.

Just some advice for how SDL’s hopefully soon-to-be-developed
accessibility features should be used once they’re available, from a
legally blind user who was designing accessible UX back before
Windows 95 was actually a thing. For experienced users, it’s all
about what I really need to know, and how I can most quickly get
that information. The elderly (or those using speech to supplement
decaying vision) and the inexperienced people still trying to think
visually want more parity between a spoken and a displayed interface.

Used to be we had pretty good access (custom-designed solutions), and
acceptable screen scraping of known DOS apps. Then as things got
graphical our access became less perfect. Nowadays with actual
access labels on controls and views, we’re beginning to regain the
access we had when the interfaces were all custom-designed for our
benefit. It’s kind of cool, actually, and I’m excited to see SDL
benefiting from the modern push in the hopefully near future.

Joseph

On Sat, Jan 03, 2015 at 07:26:21AM -0300, Sik the hedgehog wrote:

2015-01-04 20:42 GMT-03:00, T. Joseph Carter :

Trust me, the user of the screen reader will want to shut it up all
the time. Probably they have a system-wide keybinding for this, but
if your stuff somehow overrides that or is self-voicing, the
function will probably be wanted.

What I meant is that you could probably achieve the same effect by
just calling SDL_AccessSpeakText(""), rendering SDL_AccessShutUp()
kind of pointless.

How does a screen reader interpret that? Usually by trying to guess.
It has to know what those things mean. Is “Sun” the word sun, or is
it intended to be an abbreviation for Sunday? A screen reader must
expend effort trying to figure that out. If this were emacspeak or a
similar embedded environment that was self-voicing, it would know
what those numbers and punctuation marks on the left mean, and that
the thing on the right was a date. It could thus read each
appropriately.

Oh, I thought you were talking about distinguishing between speech and
braille output, to account for the inherent differences in the output
medium.

But yeah, isn’t that the whole point of having separate accessibility
text? The program displays one thing on the screen, but the tools see
something else which is more appropriate. Kind of like how the alt text
works with the img element in HTML (at least when used properly). This
would already come as-is with the proposed API; the bigger problem
would be educating developers to understand how to use it properly -
which I imagine we shouldn’t have a problem with, right?
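In game terms, with the same hypothetical call as before, the
difference would look something like this:

    /* Bad: making the user parse what the screen happens to draw. */
    SDL_AccessSpeakText("HP * * * - -   MP * * * * -");

    /* Good: saying what it means. */
    SDL_AccessSpeakText("Health 3 of 5, magic 4 of 5.");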