Lost events

I sometimes hear talk about “losing events”, and I’m not sure I
understand the issue well…

My experience is mainly with X11 and a bit of Win32, and from what I
remember, there is no such thing as “losing events”. The queues can
get big, and then you spend a lot of time doing XNextEvent (for
example), but you really need a lot of events before you start
having problems.

On the other hand, looking at the SDL internal event queue, I can see
that if you don’t call SDL_PumpEvents for a while, the underlying
system won’t miss a single event, but SDL will fail to put some of
them on its internal queue (because it fills up at some point). If you
are using the event thread feature, the problem just moves: there will
never be many events waiting to be fetched from the system, but if the
main program doesn’t call SDL_PollEvent (or SDL_WaitEvent) often
enough, events are lost again. I understand that the queue has a fixed
size to avoid memory allocation (a good plan, IMHO!), but it’s still
kind of annoying!

Now, my usual solution for this (keeping in mind that I have
experience on MUCH fewer platforms than SDL supports! hence this
email) is to not do any queueing (see the “context switching” section
of this very interesting article:
http://pl.atyp.us/content/tech/servers.html), and to process events on
the spot. On some systems (like X11), this also lets the other side
know how fast or slow we are; reading events only to throw them away
loses that information, making us look fast when we’re really
drowning (X11 combines mouse pointer motion events, for example).

Why isn’t this how SDL works? I’m guessing some supported platform
forces us to queue… Couldn’t we have the queue only on those
platforms? In my experience, systems that don’t queue are more
flexible, because queueing is something you can add on top, but not
something you can take out once it’s there!

Thanks for enlightening me!