How do desktop OSes handle touch events, for starters? Maybe it’d be
easier to adapt the API if we considered how we’re expected to use
touch events there.

2013/4/3, Alex Barry <alex.barry at gmail.com>:

So, to summarize and prioritize all the possible issues, we have:

- We need a way to definitively determine whether a touch device is a
  display or a separate device.
- If it’s a display, do the normalized coordinates correspond to the
  focused window, or to the whole display?
- If it’s a separate device, are coordinates normalized to the focused
  window, to a single display, or to all the displays in some sort of
  combination?

Is it safe to assume that the touch events were built only with mobile
devices in mind? If so, for the sake of release, should we limit the touch
API visibility/usability to mobile devices?
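
To make the coordinate question concrete, this is roughly what an
application sees through the current touch API (a quick sketch against the
2.0 headers, untested; I may be missing something). The x/y values are
normalized floats, and nothing says which surface they are normalized
against:

/* Sketch: enumerating touch devices and fingers with the current 2.0
 * touch API.  The x/y fields are normalized to 0..1, but the API does
 * not say whether that range maps to the focused window, one display,
 * or something else -- which is exactly the ambiguity above. */
#include "SDL.h"
#include <stdio.h>

static void dump_touch_state(void)
{
    int d, f;
    for (d = 0; d < SDL_GetNumTouchDevices(); ++d) {
        SDL_TouchID id = SDL_GetTouchDevice(d);
        for (f = 0; f < SDL_GetNumTouchFingers(id); ++f) {
            SDL_Finger *finger = SDL_GetTouchFinger(id, f);
            if (finger) {
                printf("device %d, finger %d: x=%.3f y=%.3f pressure=%.3f\n",
                       d, f, finger->x, finger->y, finger->pressure);
            }
        }
    }
}

The same x/y ambiguity applies to the tfinger fields delivered with
SDL_FINGERDOWN/SDL_FINGERMOTION events.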
If not (that is, if touch is meant to work on desktops as well), is there a
consistent way, on all supported OSes, to determine if a touch device is
also a display (or even associated with a specific display)? It appears
that MSDN may have something relevant here:
http://msdn.microsoft.com/en-ca/library/windows/desktop/dd317318(v=vs.85).aspx
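
For what it’s worth, on Windows 7 and later GetSystemMetrics(SM_DIGITIZER)
at least distinguishes integrated (on-screen) digitizers from external
ones, although it is a system-wide answer rather than a per-device or
per-monitor one. A minimal sketch, untested:

/* Sketch (Win32, Windows 7+): a coarse, system-wide answer to whether
 * the touch/pen digitizer is integrated into a display or external.
 * It does not say which device or which monitor. */
#define _WIN32_WINNT 0x0601  /* SM_DIGITIZER and NID_* need Windows 7 headers */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    int caps = GetSystemMetrics(SM_DIGITIZER);
    if (caps & NID_INTEGRATED_TOUCH) printf("integrated touch (part of a display)\n");
    if (caps & NID_EXTERNAL_TOUCH)   printf("external touch (separate device)\n");
    if (caps & NID_INTEGRATED_PEN)   printf("integrated pen\n");
    if (caps & NID_EXTERNAL_PEN)     printf("external pen\n");
    if (!(caps & NID_READY))         printf("digitizer input not ready\n");
    return 0;
}

Windows 8 apparently adds GetPointerDevices(), which I believe can report
a per-device monitor handle, but I haven’t verified that.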
Not sure about Xorg
(http://www.x.org/wiki/Development/Documentation/Multitouch#Event_processing),
since all the API links on that page are broken. I have no experience with
Mac, so I’m not sure what sort of documentation to look for.
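
That said, the XInput 2.2 headers do seem to expose the distinction
directly: each touch device reports either direct touch (the device is a
screen) or dependent touch (a separate pad that targets the pointer
position). A rough sketch, assuming XInput 2.2 is available and untested
against the X11 backend:

/* Sketch (X11, XInput 2.2): classify touch devices as direct
 * (touchscreens) or dependent (separate touchpads). */
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int major = 2, minor = 2, ndevices, i, j;
    XIDeviceInfo *devices;

    if (!dpy)
        return 1;
    /* Touch classes are only reported to clients that announce XI 2.2. */
    if (XIQueryVersion(dpy, &major, &minor) != Success || minor < 2)
        return 1;

    devices = XIQueryDevice(dpy, XIAllDevices, &ndevices);
    for (i = 0; i < ndevices; ++i) {
        for (j = 0; j < devices[i].num_classes; ++j) {
            if (devices[i].classes[j]->type == XITouchClass) {
                XITouchClassInfo *t = (XITouchClassInfo *)devices[i].classes[j];
                printf("%s: %s, up to %d touches\n",
                       devices[i].name,
                       t->mode == XIDirectTouch ? "direct touch (a screen)"
                                                : "dependent touch (separate pad)",
                       t->num_touches);
            }
        }
    }
    XIFreeDeviceInfo(devices);
    XCloseDisplay(dpy);
    return 0;
}

(Needs libXi at link time; note it still doesn’t say which monitor a
direct-touch screen corresponds to.)

On Wed, Apr 3, 2013 at 11:16 AM, Drake Wilson wrote: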
Quoth Sik the hedgehog <@Sik_the_hedgehog>, on 2013-04-03 11:36:35 -0300:

Wacom tablets are a whole different beast than usual. Yes, they’re
touch devices, and they mirror what’s on the monitor, but they’re also
analog (in the sense that you can detect how much pressure there is,
not just where). Also, you have to factor the pen into the equation,
since that’s how touch works on them; no idea whether touch works
without the pen.

It depends an awful lot on the device. My Bamboo tablet advertises
multitouch input plus pen rather heavily in the packaging, though
I’m not sure both are supported simultaneously. And some drawing
tablets of similar types do have displays as well that can mirror the
primary monitor. And there are pen tilt sensors. I don’t recall the
exact set of possibilities just now, but it’s pretty wide.

The pen also has several buttons on it. I think they behave as extra
mouse buttons, although I don’t remember from which number (8 onwards,
I believe?).

It’s reconfigurable depending on the device and on the OS. And pens
can have multiple tips or ends, which have different IDs so they can
be distinguished by graphics programs to activate different tools.
And sometimes they can be put into relative versus absolute mode.
And…

I doubt it makes sense for SDL to try to handle every such case, but
picking a consistently bounded subset is fairly important. Designing
UI for spatially disjoint touch input can be substantially different
from designing it for display-attached touch, among other things—the
set of target configurations with which any particular application
is going to be practically usable is going to be pretty narrow unless
a lot of work is put in by the application developers.

That said, I loosely agree with Gabriel as far as being able to query
the association of touch devices and windows with shared 2D geometry
spaces (displays, separate pads, etc.), but I don’t know whether it’s
practical in all the targeted environments.

—> Drake Wilson