SDL gesture API

We’re doing final API evaluation for SDL 2.0.

Does anyone use the SDL gesture recognition API?
SDL_DOLLARGESTURE, SDL_MULTIGESTURE, etc.

Would anyone be opposed to having it be pulled out into a separate library?

Cheers!
–Sam
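
(For context, this is roughly what consuming the API in question looks like: a minimal sketch of an SDL 2.0 event loop, assuming SDL_Init(SDL_INIT_VIDEO) and a window already exist, with printf standing in for real handlers:)

    #include "SDL.h"
    #include <stdio.h>

    /* Drain pending events and report the two gesture events under
       discussion. SDL synthesizes these from raw finger events. */
    void pump_gestures(void)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_MULTIGESTURE:
                /* Two-plus fingers: dDist is the change in pinch distance,
                   dTheta the change in rotation, since the last event. */
                printf("multigesture: dDist=%f dTheta=%f fingers=%u\n",
                       e.mgesture.dDist, e.mgesture.dTheta,
                       (unsigned)e.mgesture.numFingers);
                break;
            case SDL_DOLLARGESTURE:
                /* A recorded $1 template matched; error is the match cost. */
                printf("dollar gesture %lld matched, error=%f\n",
                       (long long)e.dgesture.gestureId, e.dgesture.error);
                break;
            }
        }
    }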

I use it and I think pulling it out makes a lot of sense.

Cheers,
Richard.


Given that the main SDL library is bare bones, gestures as an external
library makes a lot of sense.


I don’t use touch events (I don’t even have touch hardware) so I can’t
comment about this. And yeah, at first glance, having that as a
separate library indeed makes sense, especially considering all the
kinds of gestures that could be there.

Just a question though: doesn’t the operating system handle some
gestures on its own, i.e. parse the movement and send the program an
event with the relevant gesture? Or is that just a misunderstanding?
(And if I’m right, is there any OS where the built-in gesture support
is too lackluster to be worth using?)

I mention this because if the OS handles gestures then a separate
library may have a hard time reusing that functionality directly.
Though maybe it’s better to just do everything manually in the long
term and ignore the OS when it comes to gestures (except maybe for
retrieving user settings, but it wouldn’t need SDL for that).



Every operating system I am familiar with that has gestures also has
the ability to disable them to get raw touch data.

I was under the impression the OS just gave out both at the same
time and that it was up to the program to decide which to use. I was
talking more about whether the built-in gesture support provided by
the operating system is worth using or not.

In any case I imagine that building gestures directly off the raw
data may be better in the long term, especially for adding support
for gestures the operating system does not provide as-is. My only
worry would be any system settings that tweak the built-in gestures,
but I imagine that could still be matched by retrieving the settings
and adjusting the gesture processing accordingly.
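
(To make that concrete, a rough sketch of a hand-rolled swipe detector built only on SDL’s raw finger events; the thresholds here are invented placeholders for exactly the kind of values that would ideally come from the OS user settings:)

    #include "SDL.h"
    #include <math.h>

    /* Hypothetical tuning knobs; in the scheme above these would be read
       from the system settings instead of being hard-coded. */
    #define SWIPE_MIN_DIST 0.15f  /* tfinger coords are normalized 0..1 */
    #define SWIPE_MAX_MS   500

    static float  start_x, start_y;
    static Uint32 start_ms;

    /* Feed every event through; returns 1 when a horizontal swipe ends,
       writing +1 (right) or -1 (left) to *dir. Single-finger only. */
    int detect_swipe(const SDL_Event *e, int *dir)
    {
        if (e->type == SDL_FINGERDOWN) {
            start_x  = e->tfinger.x;
            start_y  = e->tfinger.y;
            start_ms = SDL_GetTicks();
        } else if (e->type == SDL_FINGERUP) {
            float dx = e->tfinger.x - start_x;
            float dy = e->tfinger.y - start_y;
            if (SDL_GetTicks() - start_ms <= SWIPE_MAX_MS &&
                fabsf(dx) >= SWIPE_MIN_DIST && fabsf(dx) > fabsf(dy)) {
                *dir = (dx > 0.0f) ? 1 : -1;
                return 1;
            }
        }
        return 0;
    }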


I’m not sure I get your concern. The intent of a gesture library is
that it abstracts away the idea of touch. If you were, for instance,
going to use the system’s gesture interface, you couldn’t effectively
build your own on top of it, because the touch information you want
is hidden behind a simplified set of gesture events instead of touch
events.

So, in short, when getting events from the system you would specify whether
you want to receive the OS’s gesture events or touch events. If you want to
build your own gesture library, you opt to receive touch events. If you
just want simple, common touch tasks - zoom, scroll, “click” - you opt for
gesture events from the OS.

In SDL I believe the default is to request touch events, with a flag
to request OS gesture events (someone correct me if I’m wrong!). Once
SDL gets them, they are abstracted into SDL events again: either touch
events, system gesture events, or gesture events from another library.

Hopefully this clears things up and I haven’t provided any misleading
information. <.<'
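
(For reference, the dollar-template half of the current API is driven roughly like this; a sketch only, and "gestures.bin" is an arbitrary file name:)

    #include "SDL.h"

    /* Ask SDL to record the next stroke on any touch device as a $1
       template, then persist all templates for reloading at startup. */
    void record_and_save(void)
    {
        if (SDL_RecordGesture(-1) != 1) {   /* -1 = any touch device */
            SDL_Log("no touch device available to record on");
            return;
        }
        /* ...the user draws; SDL then delivers an SDL_DOLLARRECORD event... */
        SDL_RWops *out = SDL_RWFromFile("gestures.bin", "wb");
        if (out) {
            SDL_SaveAllDollarTemplates(out);
            SDL_RWclose(out);
        }
    }

    /* Reload saved templates so SDL_DOLLARGESTURE events can match them. */
    void load_templates(void)
    {
        SDL_RWops *in = SDL_RWFromFile("gestures.bin", "rb");
        if (in) {
            SDL_LoadDollarTemplates(-1, in);
            SDL_RWclose(in);
        }
    }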


I was just wondering if there’s anything we could miss by not using
the operating system’s own functionality (or whether doing so would
be a downgrade).

To be blunt, I’m asking these questions mostly to make sure we don’t
miss any tiny detail that could end up being important in the long
term.


On Mon, Mar 4, 2013 at 12:11 AM, Sik the hedgehog <sik.the.hedgehog at gmail.com> wrote:

I don’t use touch events (I don’t even have touch hardware) so I can’t
comment about this. And yeah, at first glance, having that as a
separate library indeed makes sense, especially considering all the
kinds of gestures that could be there.

I’d like to have more accurate gesture detection and built-in support
for a standard set of gestures in SDL, in much the same way that iOS
or Android provide these things. But the existing dollar and
multigesture support is not sufficient, and my impression was always
that it should probably be removed or upgraded. The dollar gestures
are not accurate enough and each developer has to implement standard
gestures themselves. I’ve implemented pinch and zoom according to
about five different web pages and the Android OS source code, and
obtained different and inaccurate results in various cases. My
assumption is that stock support for standard gestures would give
different apps a consistent quality of feel.
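
(For illustration, the pinch-zoom case expressed against the current SDL_MULTIGESTURE event; the relative-scaling step and the clamp range are assumptions, not anything the API prescribes:)

    #include "SDL.h"

    static float zoom = 1.0f;  /* current zoom factor */

    /* Accumulate pinch deltas into a zoom factor. dDist is the normalized
       change in distance between the fingers since the last event. */
    void apply_pinch(const SDL_Event *e)
    {
        if (e->type == SDL_MULTIGESTURE && e->mgesture.numFingers == 2) {
            zoom += e->mgesture.dDist * zoom;  /* scale relative to current */
            if (zoom < 0.1f)  zoom = 0.1f;
            if (zoom > 10.0f) zoom = 10.0f;
        }
    }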

Just a question though: doesn’t the operating system handle some
gestures on its own, i.e. parse the movement and send the program an
event with the relevant gesture? Or is that just a misunderstanding?
(And if I’m right, is there any OS where the built-in gesture support
is too lackluster to be worth using?)

Yes, like pinch and zoom.

I mention this because if the OS handles gestures then a separate
library may have a hard time reusing that functionality directly.
Though maybe it’s better to just do everything manually in the long
term and ignore the OS when it comes to gestures (except maybe for
retrieving user settings, but it wouldn’t need SDL for that).

If I recall correctly, Android couples its standard gesture detection
to UI widgets and the SDL display does not inherit the correct object
to make these available. Unless I missed something, I don’t think
this is a concern.

Cheers,
Richard.


I do not use them, but I think having them in a separate library makes
a lot of sense: it slims down the main SDL library, lets the gesture
library develop at its own pace, and unties additional gestures from
the release schedule of the main SDL library.

Cheers,
Vittorio