I’ve read http://www.libsdl.org/tmp/SDL/README-gesture.txt and I’ve been trying to make sense of testgesture.c so that I can use pinch zooming in my iPad app, but I’m not quite sure how it works. According to the API, I can save a multitouch gesture and then try to match future input gestures against it. But there are two problems with this that I can see.
First, I want a general pinch gesture, regardless of whether a person’s fingers are above each other, side by side, or anything in between. Any time two fingers are put on the touch surface and then move apart (or together), I’d like to get an event. To do this, I don’t know whether I would have to save a bunch of pinch gestures, each at a different orientation. Or are gestures direction-independent (or can they be), so that I can record a single “pinch” gesture? A rough sketch of the behavior I’m after is below.
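To make that concrete, this is roughly what I’ve been sketching on top of the raw finger events rather than the gesture API. The field names (event.tfinger.fingerId, .x, .y) are taken from SDL_touch.h as I read it, and I’m assuming the coordinates are normalized; I may be off on the details:

```c
#include <SDL.h>
#include <math.h>
#include <stdio.h>

/* Track up to two active fingers and report the change in the
 * distance between them -- my rough idea of a direction-independent
 * pinch. Assumes event.tfinger.x/y are in a consistent coordinate
 * space (I believe normalized, but I haven't confirmed). */
static SDL_FingerID ids[2];
static float px[2], py[2];
static int nfingers = 0;

static float finger_distance(void)
{
    float dx = px[0] - px[1];
    float dy = py[0] - py[1];
    return sqrtf(dx * dx + dy * dy);
}

void handle_touch(const SDL_Event *e)
{
    switch (e->type) {
    case SDL_FINGERDOWN:
        if (nfingers < 2) {
            ids[nfingers] = e->tfinger.fingerId;
            px[nfingers] = e->tfinger.x;
            py[nfingers] = e->tfinger.y;
            nfingers++;
        }
        break;
    case SDL_FINGERMOTION:
        if (nfingers == 2) {
            float before = finger_distance();
            for (int i = 0; i < 2; i++) {
                if (ids[i] == e->tfinger.fingerId) {
                    px[i] = e->tfinger.x;
                    py[i] = e->tfinger.y;
                }
            }
            float after = finger_distance();
            /* Positive = fingers spreading (zoom in),
             * negative = fingers pinching (zoom out). */
            printf("pinch delta: %f\n", after - before);
        }
        break;
    case SDL_FINGERUP:
        /* Simplified: a lifted finger just resets the tracking. */
        nfingers = 0;
        break;
    }
}
```

This works in principle, but it feels like I’m reimplementing something the gesture system should already give me.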
Secondly, from what I can see, when an input gesture matches a previously saved gesture, a single event is sent. That works great for, say, performing a certain function whenever the user draws a lightning-bolt symbol: one gesture triggers one action. But I’m looking for a control mechanism that lets people pinch and zoom, which requires monitoring the gesture as it is performed and updating the view accordingly. I suppose I could cobble something together from a complicated set of multiple saved gestures, linked together logically to calculate how far the pinch should actually zoom, but that seems like the wrong way to do things.
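The README does also mention SDL_MULTIGESTURE events with a dDist field described as the amount the fingers pinched during the motion. Is something like the following the intended mechanism for continuous pinch zoom? (The sensitivity constant and clamping are my own placeholders, and I’m guessing at what dDist actually measures:)

```c
#include <SDL.h>

/* Continuously update a zoom factor from multigesture events.
 * My guess is that event.mgesture.dDist is the change in the
 * distance between the fingers since the previous event. */
static float zoom = 1.0f;

void handle_event(const SDL_Event *e)
{
    if (e->type == SDL_MULTIGESTURE && e->mgesture.numFingers == 2) {
        zoom += e->mgesture.dDist * 4.0f;   /* 4.0f: arbitrary sensitivity */
        if (zoom < 0.1f)                    /* clamp to something sane */
            zoom = 0.1f;
    }
}
```

If that is the right approach, I’d still like to know how it interacts with the saved-gesture matching described above.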
Is there any example showing how to use SDL’s touch events to get pinch-zoom input?