Donny Viszneki wrote:
Using multiple mice for multi-touch input works, but it’s not ideal.
Here are my thoughts:
Mice are persistent, but touches come into being and cease to exist all the
time. So associating touches with a finite number of mouse indices is
logically tenuous and awkward to program. The mouse indices cease to be
meaningful (two different touch events should really have different indices)
and you can get “information” about touches that don’t even really exist (if
you’re not careful).
Some mouse concepts also just don’t apply to touches – cursors and mouse
warping, for example.
I would agree that this is a compelling argument were it not for two facts:
1) “Touch interfaces” share all the characteristics you’ve pointed out
with common drawing tablets, yet no drawing tablet API I have seen
deviates from the traditional mousing APIs (with the critical
exception of pressure sensitivity).
2) SDL has supported tablets-as-mice since its inception, and has
supported tablets with pressure sensitivity since 1.3.
With touch input, gestures like tap, double tap, etc. are common but can be
problematic to detect. A touch API could help with this by allowing the
programmer to pass time thresholds for multi-tapping and then have these
gestures passed as events.
While this may be useful…
A) Not really in the spirit of SDL to interpret user input. Perhaps a
satellite library? SDL_gestures?
B) Plenty of GUIs intended for traditional mice – not touch
interfaces – provide double- and even triple-click interfaces. Yet in
all this time, Sam has apparently not been compelled to support those
in the Simple DirectMedia Layer.
To contradict myself a bit though, in the spirit of argument:
C) SDL does have interfaces for key repeat, yet that facility does not
tie in with the desktop environment’s key repeat functionality, thus
negating the user’s key repeat settings.
Having said all this, I still think SDL’s current API is perfectly
well suited. I would probably favor adding a “transient” boolean field
to the mouse/pointer/touch SDL event structure before giving
"different touch events … different indices." Oh, perhaps you only
meant simultaneous touches… that would make a great deal of sense – but
is that not already how it’s done in SDL-on-iPhone? After all, if I
had made the iPhone, a single isolated touch would always have the
same device/touch ID, and the second simultaneous touch would always
have the second ID.
I suppose one might think that confusing if one wanted to support an
A-Down, B-Down, A-Up, A-Down situation.
I’d argue, however, that if this is a gesture your app actually needs
special support for in such a way that it would be inappropriate to
give A the 0 device/touch ID both on its first and second Down events,
you can and should implement this in your application or a satellite
library, not in SDL proper.
The low-level touch API challenges have already been faced by tablet
API designers! Keep in mind that the “pressure” field already exists
to indicate if a finger/stylus has lost contact with the screen!

On Thu, Apr 2, 2009 at 12:40 PM, Holmes Futrell wrote: