Outputting text to accessibility tools

OK so now that things seem to have calmed down… Can we go forward
with this? And can we summarize what we have figured out about API
support? I got lost on that already.

In any case the biggest problem right now is how to implement this
from the SDL API’s viewpoint. Should it be its own subsystem, part of
the video subsystem or something else? How should it be enabled?

Your idea, so propose an API. :wink: It helps if you include a sample implementation for at least one platform and have at least checked that it works on the others. Mac and Linux are the most obvious because their screen readers are free. Windows is possible too, with NVDA being free.

Again I cite the Game Controller API: Not the prettiest implementation possible, but working code trumps theoretical perfection.

Joseph

Sent via mobile


I sort of figured SAPI would be a good starting place.

Joseph

On Wed, Jan 21, 2015 at 10:31:37AM -0300, Sik the hedgehog wrote:

Yeah, that’s pretty much the problem here.

2015-01-20 20:15 GMT-03:00, Jared Maddox:

Remember that either the video or the events subsystem requires the
other, so this sort of dependency isn’t an impediment. I assume that
you’ll need to reserve some stuff in “video subsystem space”, but as
long as the relevant code is stored in the video subsystem
implementation files I expect that there wouldn’t be a problem.

Hmmm, now that I look into it, SDL_PumpEvents calls SDL_GetVideoDevice,
which returns a structure holding all the function pointers for the
video subsystem. Maybe I can use that to communicate with the video
subsystem.
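
For illustration, here is a rough sketch of that idea, assuming a hypothetical SpeakText hook were added to the internal SDL_VideoDevice structure (none of this exists in SDL today, it just shows the function-pointer routing):

    /* Hypothetical sketch, not real SDL code: routing an accessibility call
       through the video device's function-pointer table, the same way
       SDL_PumpEvents reaches the platform backend. */
    #include "SDL_sysvideo.h"   /* internal SDL header declaring SDL_VideoDevice */

    int SDL_SpeakText(const char *text)
    {
        SDL_VideoDevice *device = SDL_GetVideoDevice();
        if (!device || !device->SpeakText) {   /* SpeakText is a made-up hook */
            return SDL_SetError("Text output not supported by this video driver");
        }
        /* Each platform backend (Cocoa, X11, Windows...) would fill in this
           pointer in its CreateDevice routine. */
        return device->SpeakText(device, text);
    }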


Anyway, this aside: would it be OK if I made a mock-up of the API
implementing only some of the functionality to test, before anybody
tries to integrate it into SDL? I’m thinking of doing SAPI or
Speech Dispatcher, since those two are easy to implement without
touching SDL’s internals (and I already have some code around for
them).
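
For reference, the Speech Dispatcher side really is that simple; here is a minimal standalone sketch of the libspeechd calls such a mock-up would wrap (plain C, not SDL code, built against the speech-dispatcher development package):

    /* Minimal Speech Dispatcher example, independent of SDL. */
    #include <stdio.h>
    #include <libspeechd.h>

    int main(void)
    {
        /* Connect to speech-dispatcher in single (synchronous) mode. */
        SPDConnection *conn = spd_open("reader-mockup", "main", NULL, SPD_MODE_SINGLE);
        if (conn == NULL) {
            fprintf(stderr, "Could not connect to Speech Dispatcher\n");
            return 1;
        }

        /* Queue a message at normal text priority for the user's synthesizer. */
        spd_say(conn, SPD_TEXT, "Hello from the mock-up");

        /* spd_cancel(conn) would be the natural mapping for a "shut up" call. */
        spd_close(conn);
        return 0;
    }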



totally not e-mailing to hide his laziness

OK, after messing around a bit with this code I’ve decided that the
best option would be to implement this feature the same way renderers
are implemented (rather than as a subsystem). This may cause a bit of
inconvenience with SAPI, but I think that’s easy to cope with.

So, first we’d have a new type: SDL_Reader (I decided to call it a
reader because, let’s face it, screen readers will be the main use of
this thing).

The API would be like this for now (I know there may be demand for
more functionality, but let’s focus on the basics first):

  • SDL_CreateReader(window)
  • SDL_DestroyReader(reader)
  • SDL_ReaderSpeak(reader, text)
  • SDL_ReaderShutUp(reader)
  • SDL_ReaderRepeat(reader)

The first two are self-explanatory. SDL_ReaderSpeak outputs the text
to the screen reader (spoken by speech engines, shown on braille
displays). SDL_ReaderShutUp clears the output (stops speech engines,
clears braille displays). SDL_ReaderRepeat is like Speak, but it
repeats the last text that was sent to that window.
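
To make the shape of the proposal concrete, here is roughly how a game would use it; the signatures are just my guess at the obvious ones, since none of this exists yet:

    /* Hypothetical usage of the proposed reader API; nothing here is in SDL yet. */
    SDL_Window *window = SDL_CreateWindow("My game",
                                          SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED,
                                          640, 480, 0);

    SDL_Reader *reader = SDL_CreateReader(window);
    if (reader != NULL) {
        /* Spoken by speech engines, shown on braille displays. */
        SDL_ReaderSpeak(reader, "New game started");

        /* Cut off whatever is currently being output. */
        SDL_ReaderShutUp(reader);

        /* Output the last text sent to this window again. */
        SDL_ReaderRepeat(reader);

        SDL_DestroyReader(reader);
    }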

Finally, there would be a hint to override the backend choice,
SDL_HINT_READER_DRIVER (analogous to the existing SDL_HINT_RENDER_DRIVER);
a usage sketch follows the list. The list of drivers will of course
change over time, but this is what comes to mind off the top of my
head right now:

  • “sapi”: SAPI 5.x (Windows)
  • “speechd”: Speech Dispatcher (Linux)
  • “voiceover”: VoiceOver (OS X, iOS)
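
As mentioned above, forcing a specific backend would just be a matter of setting the hint before creating the reader; SDL_SetHint already exists, while the hint name is only the one proposed here:

    /* Force the Speech Dispatcher backend; SDL_HINT_READER_DRIVER is the
       proposed (not yet existing) hint. */
    SDL_SetHint(SDL_HINT_READER_DRIVER, "speechd");
    SDL_Reader *reader = SDL_CreateReader(window);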

Does this seem good for a start?