GSoC touch input idea

Hi!

I’m a Spanish student. I was reading the ideas for GSoC and saw the
Touch Input one. I thought it could be interesting, so I downloaded the
svn code. While playing with it I saw that the API has changed. I also
saw that there is code to work with digital tablets with pressure, etc.
I don’t understand the idea very well, but I thought about creating an
auxiliary library, like SDL_mixer, called SDL_touch, which would be
able to recognize some gestures and maybe… characters too. Later,
multitouch gestures could be implemented.

I don’t know if this was the original idea, so I’d like someone to
explain it to me in more detail. In any case, I’d also like to know
your opinion of my idea.

Thanks in advance.

2009/3/23 Alejandro Castaño del Castillo:

I’m a Spanish student. I was reading the ideas for GSoC and saw the
Touch Input one. I thought it could be interesting, so I downloaded the
svn code. While playing with it I saw that the API has changed.

Exactly what are you referring to?

I also saw that there is code to work with digital tablets with
pressure, etc. I don’t understand the idea very well, but I thought
about creating an auxiliary library, like SDL_mixer, called SDL_touch,
which would be able to recognize some gestures and maybe… characters
too. Later, multitouch gestures could be implemented.

For a predetermined number of simultaneous sensed contact points (like
iPhone which has a max of two, or so I have been told) SDL’s back-end
for “pointer device” input would probably need to be changed slightly,
unless of course the API provided for accessing the touch sensor
simply overloads the platform’s existing pointer device APIs.

I don’t know if this was the original idea, so I’d like someone to
explain it to me in more detail. In any case, I’d also like to know
your opinion of my idea.

I believe most of these systems use machine learning. Specifically I
believe simple FF neural nets are typically used. Sounds like an
exciting project, actually. I’d love to help, but I don’t have any
multitouch-capable devices with which to test.

(Although actually, I do have the supplies to make one of these
http://www.youtube.com/watch?v=gFKCmfj-yuc but if I made one of those,
I wouldn’t want a formal pointer API sitting between me and my
multitouch data, since I could potentially get much cooler information
by directly accessing the webcam’s framebuffer.)

--
http://codebad.com/

2009/3/23 Donny Viszneki <donny.viszneki at gmail.com>:

2009/3/23 Alejandro Castaño del Castillo <@Alejandro_Castano_de>:

I’m a Spanish student. I was reading the ideas for GSoC and saw the
Touch Input one. I thought it could be interesting, so I downloaded the
svn code. While playing with it I saw that the API has changed.

Exactly what are you referring to?

I’ve written some games with SDL 1.2, and I can see several changes in
the svn code, like multiple-mouse support.

I also saw that there is code to work with digital tablets with
pressure, etc. I don’t understand the idea very well, but I thought
about creating an auxiliary library, like SDL_mixer, called SDL_touch,
which would be able to recognize some gestures and maybe… characters
too. Later, multitouch gestures could be implemented.

For a predetermined number of simultaneous sensed contact points (like
iPhone which has a max of two, or so I have been told) SDL’s back-end
for “pointer device” input would probably need to be changed slightly,
unless of course the API provided for accessing the touch sensor
simply overloads the platform’s existing pointer device APIs.

I don’t know how multitouch technology works, but I read on some
websites that, at least in X11, multitouch contacts appear “like two
mice”. So I thought no changes would be needed in core SDL. In any
case, I could work on SDL input; mouse support seems a little
“broken”. I can’t make some functions work, for example SDL_WarpMouse.

I don’t know if this was the original idea, so I’d like someone to
explain it to me in more detail. In any case, I’d also like to know
your opinion of my idea.

I believe most of these systems use machine learning. Specifically I
believe simple FF neural nets are typically used. Sounds like an
exciting project, actually. I’d love to help, but I don’t have any
multitouch-capable devices with which to test.

You are probably right, but I’m not sure either. I’d like Sam (or
whoever wrote the idea) to explain what he really wants.
I really want to work on SDL because I have used it a lot. I read all
the ideas and this one looks the most interesting, but I wouldn’t mind
working on Recording or Multi-Display support either.

(Although actually, I do have the supplies to make one of these
http://www.youtube.com/watch?v=gFKCmfj-yuc but if I made one of those,
I wouldn’t want a formal pointer API sitting between me and my
multitouch data, since I could potentially get much cooler information
by directly accessing the webcam’s framebuffer.)

P.S.: Is there any easy bug I could try to fix, to get familiar with the code?

If you are trying to use SDL to read multitouch information, then
there are some existing applications that do the heavy lifting for
you. tbeta (http://tbeta.nuigroup.com/) and Touchlib
(http://nuigroup.com/touchlib/) both read camera input and convert it
to TUIO messages (http://tuio.org/). These messages contain positional
data (e.g. coordinates and blob size) and touch events (e.g. finger
down, moved, and released).

You can read these in over the network using SDL’s networking
functions. This gets around the limitation of having a fixed number of
“cursors”, as you are getting the raw touch data.
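For reference, TUIO messages are OSC bundles sent over UDP, and OSC encodes its float32 arguments big-endian. Once a raw packet sits in a buffer (received e.g. with SDL_net’s SDLNet_UDP_Recv), each 4-byte float argument can be decoded portably; parsing the full OSC address/type-tag structure around it is omitted in this sketch:

```c
#include <stdint.h>
#include <string.h>

/* Decode one big-endian OSC float32 argument from a raw packet.
 * Building the bit pattern byte by byte works regardless of the
 * host's own endianness; the host is assumed to use IEEE-754. */
float osc_read_float(const unsigned char *p)
{
    uint32_t bits = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
                  | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    float f;
    memcpy(&f, &bits, sizeof f);  /* reinterpret the IEEE-754 pattern */
    return f;
}
```

TUIO trackers conventionally send to UDP port 3333, so an SDL_net receiver would open that port and hand each packet’s payload to a decoder like this.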

If you are trying to get SDL to read camera input and then convert
that to multitouch data (in the case of vision-based multitouch
systems), then I’m not really sure how to go about this, as it would
mean complex image processing.