SDL_WaitEventTimeout

One of the most common control structures for applications (read: games) is
the event loop. I am sure you are all familiar with this concept. The
following code can be used to implement it:

while(!done){
    gotone = SDL_WaitEventTimeout(e, TICKLEN);
    if(gotone)
        execute_user_command(e);
    if(tick_up())
        do_oncepertick_stuff();
}

This is the way programmers are told to program on Macs, Unices, and
’doze, and for one simple reason: it works. On the Mac, the syscall that makes
this possible is WaitNextEvent, which waits until the next event or the timeout,
whichever comes first. On UNIX, one can use a variety of methods to achieve the
same effect. And I don’t want to know the Win32 API, but I have been assured
that even Microsoft recognizes the necessity of a similar syscall.
Unfortunately, libSDL lacks such a basic and necessary API. The classical way
to implement a similar control structure in libSDL code has been to only execute
user input once per tick, like this:

while(!done){
    while(SDL_PollEvent(e))
        execute_user_command(e);
    SDL_Delay(time_until_next_tick());
    do_oncepertick_stuff();
}

which is unfortunate and violates the principles of games where input is supposed
to be handled instantly. By saying that this is the classical way to implement
it, I mean that I have seen and even hacked on programs which do this. In order
to handle input instantly, or close thereto, it has been necessary to
gratuitously multithread. I refuse to write sample code displaying gratuitous
multithreading.

So, add the following code to your SDL_events.c file. My apologies for not
providing a diff, but this code is short and I’m on MacOS 8.6.

int SDL_WaitEventTimeout(SDL_Event *event, Uint32 timeout)
{
    Uint32 i;

    for(i = 0; i < timeout; i += 10) {
        SDL_PumpEvents(); /* pull pending events in from the OS */
        switch(SDL_PeepEvents(event, 1, SDL_GETEVENT, SDL_ALLEVENTS)) {
            case -1: return 0;      /* error */
            case 1:  return 1;      /* got an event */
            case 0:  SDL_Delay(10); /* nothing yet, wait a bit */
        }
    }
    return 0; /* timed out without an event */
}

Unfortunately, libSDL lacks such a basic and necessary API. The classical
way to implement a similar control structure in libSDL code has been to
only execute user input once per tick, like this:

which is unfortunate and violates the principles of games where input is
supposed to be handled instantly. By saying that this is the classical way
to implement it, I mean that I have seen and even hacked on programs which
do this. In order to handle input instantly, or close thereto, it has been
necessary to gratuitously multithread. I refuse to write sample code
displaying gratuitous multithreading.

Disregarding the usefulness of your code for a minute (I’m sure it is a
worthwhile contribution), I don’t see why you think input should be handled
instantly instead of once per frame. For even a simple 20fps game, can the
player really tell the difference between “instantly” and (on average)
1/40th of a second later? I simply run a loop that handles the input, does
the frame draw, flips, then waits for the next frame, and have had no
problems or complaints about input with this approach. What does everyone
else think on this matter?

Neil.

One of the most common control structures for applications (read: games) is
the event loop. I am sure you are all familiar with this concept. The
following code can be used to implement it:

while(!done){
    gotone = SDL_WaitEventTimeout(e, TICKLEN);
    if(gotone)
        execute_user_command(e);
    if(tick_up())
        do_oncepertick_stuff();
}

This is the way programmers are told to program on Macs, Unices, and
’doze, and for one simple reason: it works. On the Mac, the syscall that makes
this possible is WaitNextEvent, which waits until the next event or the timeout,
whichever comes first. On UNIX, one can use a variety of methods to achieve the
same effect. And I don’t want to know the Win32 API, but I have been assured
that even Microsoft recognizes the necessity of a similar syscall.
Unfortunately, libSDL lacks such a basic and necessary API. The classical way
to implement a similar control structure in libSDL code has been to only execute
user input once per tick, like this:

while(!done){
    while(SDL_PollEvent(e))
        execute_user_command(e);
    SDL_Delay(time_until_next_tick());
    do_oncepertick_stuff();
}

which is unfortunate and violates the principles of games where input is supposed
to be handled instantly. By saying that this is the classical way to implement
it, I mean that I have seen and even hacked on programs which do this. In order
to handle input instantly, or close thereto, it has been necessary to
gratuitously multithread. I refuse to write sample code displaying gratuitous
multithreading.

No. Sorry, but your animation loop is incorrect. It does have the
nasty properties that you mention, but only because it is incorrect. Try
this one. It works.

while (!done)
{
    while (!done && SDL_PollEvent(&event))
    {
        switch (event.type)
        {
            /* process an event */
        }
    }

    /* When you reach here you have processed all pending events
       and it is safe to draw the next frame. All user input
       is processed before a frame is drawn. */
}

So, add the following code to your SDL_events.c file. My apologies for not
providing a diff, but this code is short and I’m on MacOS 8.6.

int SDL_WaitEventTimeout(SDL_Event *event, Uint32 timeout)
{
    Uint32 i;

    for(i = 0; i < timeout; i += 10) {
        SDL_PumpEvents(); /* pull pending events in from the OS */
        switch(SDL_PeepEvents(event, 1, SDL_GETEVENT, SDL_ALLEVENTS)) {
            case -1: return 0;      /* error */
            case 1:  return 1;      /* got an event */
            case 0:  SDL_Delay(10); /* nothing yet, wait a bit */
        }
    }
    return 0; /* timed out without an event */
}

You are confusing two separate things, event processing, and frame rate
throttling, and trying to deal with them both at the same time. That
doesn’t work. You need to process all pending events at the top of your
animation loop and only throttle if you need to. Two separate problems
that must be handled separately.

The SDL_WaitEvent() function is used when you want to wait for an
event. SDL_PollEvent() is used when you want to process pending events,
and not block. SDL_PollEvent() is a more general solution to the problem
than what you have presented. It has the nice property that it lets your
program continue immediately when you are done processing events.

The function you have presented will let you process all events in a
loop and then when all events have been processed, it waits for the
entire timeout period before it lets you go on and do useful work. And,
of course, the timeout period is added to the total time used to process
events. (Unless you recompute the timeout period each time you call the
function.) No matter what you do to compute the timeout period, your
function adds an average of 5 milliseconds to the time needed to process
events every time you process events. And, because it waits so long, a
user typing at the keyboard or moving the mouse can lock you into the
event processing loop for an unlimited period of time, preventing your
program from ever drawing another frame.

		Bob Pendleton

On Fri, 2003-10-31 at 11:49, tfolzdon at student.umass.edu wrote:

SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl


[…]

Disregarding the usefulness of your code for a minute (I’m sure it
is a worthwhile contribution), I don’t see why you think input
should be handled instantly instead of once per frame. For even a
simple 20fps game, can the player really tell the difference
between “instantly” and (on average) 1/40th of a second later?

Well, it depends. I don’t think you’ll notice the difference if the
graphics and game logic runs at a fixed 20 fps, but if the logic
frame rate is separated from the rendering frame rate, you may run
into two problems:

1) The rendering frame rate, and thus the "evaluation
   rate" (ie the rate of event checking, advancing
   the game logic etc) is sometimes too low for
   reliable input. Input events that should belong to
   different logic frames may be grouped together and
   applied to the same logic frame.

2) The rendering frame rate is very high, and graphics
   is interpolated from the lower logic frame rate.
   If the logic frame rate is relatively low, this may
   cause the game to feel sluggish, as there is a
   fixed delay between input and reactions that
   doesn't match the rendering frame rate.

The second one isn’t a major problem, and I think you’d need a rather
low logic frame rate for your average player to notice the delay.
That said, Kobo Deluxe uses a 33.333 Hz logic frame rate +
interpolation, and with 80+ fps, the 60 ms input latency is obvious
enough to me that I’ll probably do something about it eventually.

As to the first issue; low frame rates, you could get around that by
running input in a separate thread, timestamping the events as you
receive them. That way - if you’re on a reasonably nice OS - you can
have the game process events at the right logic time, even if the
rendering (and thus, the actual event/logic processing) doesn’t keep
up.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Friday 31 October 2003 21.16, Neil Brown wrote:

Bob Pendleton pointed out a bug in my sample code. Should have been:
(in pseudocode)
while(!done){
    gotone = SDL_WaitEventTimeout(e, nextick - SDL_GetTicks());
    if(gotone){
        execute_user_input(e);
    }
    if((time = SDL_GetTicks()) >= nextick){
        if(time >= nextick + ticklen)
            nextick = time + ticklen;
        else
            nextick += ticklen;
        do_once_per_tick_stuff();
    }
}
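The catch-up rule in this corrected loop (re-base the schedule after a long stall, otherwise advance exactly one tick) can be pulled out and checked on its own. A sketch in plain C with hypothetical names; SDL_GetTicks() is replaced by a `now` parameter so the arithmetic stands alone:

```c
#include <stdint.h>

/* Compute the next tick deadline after a tick has fired at time `now`
 * (all times in milliseconds). If the loop fell more than one whole
 * tick behind, re-base on the current time instead of racing to catch
 * up; otherwise advance the schedule by exactly one tick. */
uint32_t next_tick(uint32_t nextick, uint32_t now, uint32_t ticklen)
{
    if (now >= nextick + ticklen)
        return now + ticklen;   /* stalled: re-base the schedule */
    return nextick + ticklen;   /* on time: advance one tick */
}
```

With a 50 ms tick and a deadline at 100, firing at 110 schedules the next tick for 150; firing at 400 (a long stall) re-bases to 450 instead of spinning through six missed ticks.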

Neil Brown writes:

…I dont see why you think input
should be handled instantly instead of once per frame. For even a
simple 20fps game, can the player really tell the difference
between “instantly” and (on average) 1/40th of a second later?
Yes, taking all the input that occurs during one frame and processing it
all in a batch before the next would look just like if the input had been
dealt with as it occurred.
The problem is when your game is simple enough not to need any internal
concept of framerate. In a tetris, you don’t need to repeatedly render the
same picture ten times before the game drops the piece itself if the user
doesn’t press a button, as would happen if it was rendered at 20fps and the
piece drop delay was 200ms, which is just about as short as is playable for
me.
My users have noted a delay between pressing buttons and things happening.
The event loop I’m using looks (kinda) like this (pre-rewrite):

do{ /* main loop */
    looptime = SDL_GetTicks();
    while(SDL_PollEvent(&ev))
        execute_user_input(ev);
    if(!paused)
        timer++;
    if(timer == ticks_to_drop_the_piece){
        piece_height--;
        timer = 0;
        do_stuff_if_the_piece_is_on_the_ground();
    }
    draw_to_screen();
    if(SDL_GetTicks() < looptime + 50)
        SDL_Delay(looptime + 50 - SDL_GetTicks());
} while (!done);

Now, actually, my tetris does need to draw every 50ms, because it
changes the colors of everything every 50ms (red=sin(t), green=sin(t+2pi/3),
blue=sin(t+4pi/3)). A framerate of 20fps is sufficient for the drawing, but
apparently not for my users. Should I increase the framerate, which will
only increase CPU usage?
And suppose I supported classical coloration of tetris pieces, like I am
trying to with my rewrite? (besides trying to make the code non-butt-ugly)
Classical tetris piece coloration will only require a “framerate” of however
long it takes for (1) the user to press a button, or (2) the time until a
piece drops without the user doing anything.
The point of moving that bit of code into the library instead of it being
in my game (and in the other SDL tetris game I saw, and probably in many
other SDL games) is to make a more efficient implementation possible. The
hack that I wrote for SDL is perhaps not the most efficient way of writing
it; I’m going to look more thoroughly into the way libSDL handles events
when I get back from raking leaves.

As to the first issue; low frame rates, you could get around that by
running input in a separate thread, timestamping the events as you
receive them. That way - if you’re on a reasonably nice OS - you can
have the game process events at the right logic time, even if the
rendering (and thus, the actual event/logic processing) doesn’t keep
up.
As I understand it, libSDL keeps a thread for input of its own. So, I
would be interposing a thread between libSDL’s event thread and my
application’s thread that actually does stuff, a gratuitous waste of system
resources.
Another problem with multithreading is correctness. In Quake, drawing is
done as quickly as possible, churning out frames. The logic is rather
independent of the drawing, doing AI here and noting that an explosion
occurs over there, and informing the drawing routine thereof. And input
happens when it happens. I don’t know if id Software actually implemented
Quake as a multithreaded application, in fact, as it runs under MacOS, I
rather doubt that Quake is multithreaded. But it would make sense to
multithread it. However, input, logic and drawing are not asynchronous in
Tetris. Nor are they asynchronous in Space Invaders, PacMan, or a variety
of other types of program.
Which is why I think such a function is necessary to at least my game’s
logic, and why I think most APIs (UNIX, MacOS, 'doze, etc.) include such a
call.

[…]

Now, actually, my tetris does need to draw every 50ms, because
it changes the colors of everything every 50ms (red=sin(t),
green=sin(t+2pi/3), blue=sin(t+4pi/3)). A framerate of 20fps is
sufficient for the drawing, but apparently not for my users.
Should I increase the framerate, which will only increase CPU
usage?

So why don’t you just handle the color changes like any other game
logic event, so that they cause video updates? You can only have one
rendering frame rate, but it’s not required to be constant or
locked to anything in particular. Just update whenever you need to -
which may include directly after receiving an input event.

[…]

As I understand it, libSDL keeps a thread for input of its own.

On most platforms, yes, AFAIK.

So, I would be interposing a thread between libSDL’s event thread
and my application’s thread that actually does stuff, a gratuitous
waste of system resources.

Yeah - but SDL doesn’t timestamp events, so if you need event input
timing to be more accurate than the logic and/or rendering frame rate
of your game, this is the only way. If you poll the event queue in
the same thread that does the rendering, your input timeline will get
"holes" of the size it takes to render one frame.

However, in most cases, this is not an issue. If the rendering frame
rate is so low that it could interfere with input, the game is
usually not playable anyway.

Another problem with multithreading is correctness.

Well, yeah - if you want your events timestamped, you have to pass
them to your main thread through a lock-free FIFO or something like
that… Quite trivial, actually. Either way, it’s still something you
most probably won’t need to do.
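The lock-free FIFO mentioned here can be a fixed-size ring buffer where the input thread only ever writes the `head` index and the main thread only ever writes `tail`; with exactly one producer and one consumer, no locks are needed. A hedged sketch in C: the `TimedEvent` type, size, and names are invented for illustration, and a real cross-thread version also needs memory barriers, which `volatile` alone does not provide on all platforms:

```c
#include <stdint.h>

#define FIFO_SIZE 64  /* must be a power of two */

typedef struct {
    uint32_t timestamp;  /* e.g. the tick count at receive time */
    int      code;       /* application-defined event code */
} TimedEvent;

typedef struct {
    TimedEvent buf[FIFO_SIZE];
    volatile unsigned head;  /* written only by the producer */
    volatile unsigned tail;  /* written only by the consumer */
} EventFifo;

/* Producer side: returns 0 if the FIFO is full (event dropped). */
int fifo_push(EventFifo *f, TimedEvent ev)
{
    unsigned next = (f->head + 1) & (FIFO_SIZE - 1);
    if (next == f->tail)
        return 0;            /* full */
    f->buf[f->head] = ev;
    f->head = next;          /* publish only after the slot is written */
    return 1;
}

/* Consumer side: returns 0 if the FIFO is empty. */
int fifo_pop(EventFifo *f, TimedEvent *out)
{
    if (f->tail == f->head)
        return 0;            /* empty */
    *out = f->buf[f->tail];
    f->tail = (f->tail + 1) & (FIFO_SIZE - 1);
    return 1;
}
```

The input thread would timestamp each event as it arrives and push it; the main thread pops and applies each event at its recorded logic time.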

In Quake,
drawing is done as quickly as possible, churning out frames.

Except that the monitor refresh rate should limit the maximum frame
rate on proper setups.

The
logic is rather independent of the drawing, doing AI here and
noting that an explosion occurs over there, and informing the
drawing routine thereof. And input happens when it happens. I
don’t know if ID Software actually implemented Quake as a
multithreaded application, in fact, as it runs under MacOS, I
rather doubt that Quake is multithreaded.

I don’t know if it has separate logic and rendering threads, but I
strongly doubt it. If logic ran in a separate thread, the venom gun
in RTCW (which uses the Q3A engine) wouldn’t lose fire rate if the
rendering frame rate drops too low - but that’s exactly what happens.
(That is, RTCW doesn’t even handle progress of time properly in such
cases. Logic events are lost if the frame rate is too low.)

But it would make sense
to multithread it.

Not really; at least not considering what this thread is about. The
venom gun problem in RTCW is a bug, and fixing it wouldn’t require a
separate logic thread.

Kobo Deluxe uses only one thread for logic and rendering, and it still
plays correctly down to 1 Hz - though the game is obviously
unplayable at that frame rate, because you can’t see what’s going on,
and because input events are quantized to 1 Hz.

However, input, logic and drawing are not
asynchronous in Tetris. Nor are they asynchronous in Space
Invaders, PacMan, or a variety of other types of program.

Nope. And there’s no need for that, as those games just maintain a
sufficient frame rate to keep animation smoothness and input latency
at acceptable levels. You can do that by either updating at a
sufficient rate (could be fixed, or the highest possible rendering
rate), or by updating whenever something needs updating.

Which is why I think such a function is necessary to at least my
game’s logic, and why I think most APIs (UNIX, MacOS, 'doze, etc.)
include such a call.

I think the “proper” way would be to go completely event driven, and
just use SDL_WaitEvent(). Use timers to generate events for
"spontaneous" things like color cycling, animations and stuff. Using
timeouts to actually generate timing tends to be hairy, flaky and
inaccurate no matter how you implement it.
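In such a fully event-driven loop, the only timing computation left is "how long may I block before the next timer fires?". That part needs no SDL at all; a sketch with made-up names (`deadlines` would come from whatever timer list the game keeps):

```c
#include <stdint.h>

/* Milliseconds a wait call may block before the soonest of `n` timer
 * deadlines, capped at `maxwait`. A deadline that has already passed
 * yields 0, meaning "poll, don't block". All times in milliseconds. */
uint32_t timeout_until_next_timer(const uint32_t *deadlines, int n,
                                  uint32_t now, uint32_t maxwait)
{
    uint32_t best = maxwait;
    int i;
    for (i = 0; i < n; i++) {
        if (deadlines[i] <= now)
            return 0;                  /* a timer is already due */
        if (deadlines[i] - now < best)
            best = deadlines[i] - now; /* closer pending deadline */
    }
    return best;
}
```

The loop would wait at most this long for an event, then fire whichever timers are due, so "spontaneous" things like color cycling become ordinary events rather than timing hacks.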

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Saturday 01 November 2003 20.12, tfolzdon at student.umass.edu wrote:


Hi!
Just a little remark, but I could be wrong.

On Sunday, 2 November 2003 04:04, David Olofson wrote:

I don’t know if it has separate logic and rendering threads, but I
strongly doubt it.

Well, actually one of the main hypes about Quake 3 on Linux was its
capability to profit greatly from a second CPU. I don’t see how this could
be the case having rendering and game logic in the same thread.
I could be wrong on this though, as I don’t have that game.

regards
Matthias Bach


Matthias Bach | GPG/PGP-Key-ID: 0xACA73EC9
www.marix-world.de | On Keyserver: www.keyserver.net

This correction only corrects a small part of what I consider to be
wrong with your code. The implementation of your version of
SDL_WaitEventTimeout had serious problems and leads you to make the kind
of error that was in your original code.

Take a look at this bit of pseudocode as an alternative to what you have
written:

nextTick = SDL_GetTicks() + ticklen;
while (!done)
{
    while (SDL_PollEvent(e))
    {
        process_event(e);
    }

    process_once_per_tick_stuff();

    while (nextTick > SDL_GetTicks())
    {
        SDL_Delay(5); /* delays an *average* of 5 milliseconds */
    }

    nextTick += ticklen;
}

It does pretty much what your loop does. (To me it is a lot easier to
read, YMMV :-) It processes all pending events before updating. That
way all user input is always reflected in the next update. Then it
performs an update. Only then does it throttle the frame rate by
waiting.

Always processing all user input before an update is a requirement for
getting a good responsive feel in the game. User input handling and
updating usually take a variable amount of time. This loop adjusts to
that while still giving you an average frame rate near your ideal frame
rate. The thing is that you might get caught waiting for a frame to flip
or waiting while the OS steals the CPU for other operations. So, if you
don’t process an event right now it may be as much as a few hundred
milliseconds before you get a chance to process it and that amount of
time is visible to humans.

This is close to the way I like to code, but I still am not completely
happy with it. Because you don’t really know when you are going to get
the CPU you can’t count on doing things on a fixed time basis. What
happens if my loop gets blocked for a few ticklens of time? The loop
will race to catch up causing a visible glitch in the animation. IMHO
’tis best to do everything based on elapsed time and throttle when the
frames are coming too fast to be visible.
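A minimal illustration of this elapsed-time approach: scale each update by the measured frame delta instead of assuming a fixed tick, so a stalled loop produces one large, correct step rather than a racing burst of catch-up frames. The names here are hypothetical:

```c
#include <stdint.h>

/* Advance a position by `units_per_sec` over `dt_ms` milliseconds of
 * real elapsed time. The caller measures dt_ms across the frame, e.g.
 * with SDL_GetTicks() before and after. */
double advance(double pos, double units_per_sec, uint32_t dt_ms)
{
    return pos + units_per_sec * ((double)dt_ms / 1000.0);
}
```

Whether the frame took 20 ms or 200 ms, the object ends up where the elapsed clock time says it should be, with no visible catch-up glitch.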

For a better description of what I mean take a look at my articles on
animation with SDL at:

http://linux.oreillynet.com/pub/a/linux/2003/05/15/sdl_anim.html
http://linux.oreillynet.com/pub/a/linux/2003/08/07/sdl_anim.html
http://linux.oreillynet.com/pub/a/linux/2003/10/23/sdl_anim.html

I believe these are also linked to from libsdl.org.

	Bob Pendleton

On Sat, 2003-11-01 at 13:12, tfolzdon at student.umass.edu wrote:

Bob Pendleton pointed out a bug in my sample code. Should have been:
(in pseudocode)
while(!done){
    gotone = SDL_WaitEventTimeout(e, nextick - SDL_GetTicks());
    if(gotone){
        execute_user_input(e);
    }
    if((time = SDL_GetTicks()) >= nextick){
        if(time >= nextick + ticklen)
            nextick = time + ticklen;
        else
            nextick += ticklen;
        do_once_per_tick_stuff();
    }
}



Hi!
Just a little remark, but I could be wrong.

I don’t know if it has separate logic and rendering threads, but
I strongly doubt it.

Well, actually one of the main hypes about Quake 3 on Linux was
its capability to profit greatly from a second CPU. I don’t see
how this could be the case having rendering and game logic in the
same thread. I could be wrong on this though, as I don’t have that game.

I’ve heard about that as well, but I don’t know how the work is
dispatched. It doesn’t have to be a logic/rendering split, and unless
the game actually has a logic frame rate, that kind of split
wouldn’t make much sense. (The main point with delta time based
calculations is to evaluate game logic only once per rendered frame.)

All I know is that the Q3 engine based games I’ve happened to play on
underpowered machines tend to behave as if logic is only evaluated
once per rendered frame. (And without considering low frame rates, in
the case of RTCW - otherwise it wouldn’t have made much of a
difference.) I can only make qualified guesses regarding the reasons
why it behaves like that.

Here’s another guess, BTW: Game logic does run in its own thread,
but user input runs in the main (rendering) loop. The issues with
high speed automatic weapons would then be explained by the
"autofire" being implemented in the input code, rather than in the
game logic, quantizing “fire” events to the rendering frame rate.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Sunday 02 November 2003 13.42, Matthias Bach wrote:

On Sunday, 2 November 2003 04:04, David Olofson wrote:

Well, it seems to me for applications like Word or whatever, this scheme is
fine. Because basically things only change when the user does
something. But games need to run as fast as possible… and things change
even if the user isn’t doing input. Do you think someone playing quake
would want to only see things update whenever they did a command? Without
also multithreading in your scheme, it seems like your renderloop would
stall while waiting for user input.

Or I could just be talking out of my ass. I dunno.

At 11:49 AM 10/31/2003, you wrote:

One of the most common control structures for applications (read: games) is
the event loop. I am sure you are all familiar with this concept. The
following code can be used to implement it:

while(!done){
    gotone = SDL_WaitEventTimeout(e, TICKLEN);
    if(gotone)
        execute_user_command(e);
    if(tick_up())
        do_oncepertick_stuff();
}
snip

Well, it seems to me for applications like Word or whatever, this
scheme is fine. Because basically things only change when the user
does something. But games need to run as fast as possible… and
things change even if the user isn’t doing input. Do you think
someone playing quake would want to only see things update whenever
they did a command?

OTOH, who cares if Quake renders more than one frame per CRT refresh?
;-) (In fact, that’s worse than one per refresh, as it causes
tearing.)

What I’m saying is that all normal applications, including all kinds
of games, respond to events in one form or another. The retrace sync
is just another event that you should preferably use as your time base
if you want smooth animation.

Note that the case where you can’t get retrace sync is just a special
case, which forces you to rely on some kind of hack. The most common
solution is to just ignore the problem and have the engine run as
fast as it can, as if there always was a pending retrace sync
"event".

Without also multithreading in your scheme, it
seems like your renderloop would stall while waiting for user
input.

I guess that’s intended, but there are better ways of implementing
"timer controlled animation", if you really don’t want to rely on
retrace sync, or render at full speed if retrace sync is not
available.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Friday 31 October 2003 19.45, brian wrote:

Hello David, and all others, I’m new to this list.
I subscribed because I’m having a problem with missing keys, and I expect
to get some help here from more experienced SDL developers.

I work for a company that makes mp3 jukeboxes, mounted on arcade-like
machines. We also make a MAME-based arcade. As a hobby, I also develop
an SDL-based RTS game: http://palito.9hells.org/

In a new version of our jukebox, we use a barcode reader, that connects
to the keyboard input, and sends a sequence of digits, just like
regular keystrokes, when it reads a barcode card.

In my main loop, I do SDL_PollEvent(), and in case there’s no event,
I do a SDL_Delay(1), just to prevent unnecessary 100% CPU usage.
To my surprise, SDL is missing some keys! That shouldn’t happen,
even if my event loop is not well designed, and is lagging to
catch the events, because (I think) there is an event queue (buffer),
right? If I remove the SDL_Delay(), this doesn’t happen.

Regards,
barrett

Now it gets worse. I reduced the loop to this:

    while(1) { 
            SDL_Event event; 
            if(SDL_PollEvent(&event)) {
                    if(event.type == SDL_QUIT) 
                            break;
                    if(event.type == SDL_KEYDOWN) {
                            int k = event.key.keysym.sym;
                            printf("keydown %d (%c)\n",
                                            k, isprint(k) ? k : '.');
                    }
            }
    }

and it is still missing keys sometimes!
How can this happen?

regards,
barrett

On Thu, Nov 06, 2003 at 04:51:40PM -0200, barrett at 9hells.org wrote:

In my main loop, I do SDL_PollEvent(), and in case there’s no event,
I do a SDL_Delay(1), just to prevent unnecessary 100% CPU usage.
To my surprise, SDL is missing some keys! That shouldn’t happen,
even if my event loop is not well designed, and is lagging to
catch the events, because (I think) there is an event queue (buffer),
right? If I remove the SDL_Delay(), this doesn’t happen.

Hello David, and all others, I’m new to this list.
I subscribed because I’m having a problem with missing keys, and I expect
to get some help here from more experienced SDL developers.

I work for a company that makes mp3 jukeboxes, mounted on arcade-like
machines. We also make a MAME-based arcade. As a hobby, I also develop
an SDL-based RTS game: http://palito.9hells.org/

In a new version of our jukebox, we use a barcode reader, that connects
to the keyboard input, and sends a sequence of digits, just like
regular keystrokes, when it reads a barcode card.

In my main loop, I do SDL_PollEvent(), and in case there’s no event,
I do a SDL_Delay(1), just to prevent unnecessary 100% CPU usage.
To my surprise, SDL is missing some keys! That shouldn’t happen,
even if my event loop is not well designed, and is lagging to
catch the events, because (I think) there is an event queue (buffer),
right? If I remove the SDL_Delay(), this doesn’t happen.

You need to let us know which OS you are running on so we can properly
answer the question. Some points to consider.

o There is an event queue. If it fills up SDL quietly drops the events.

o On most OSes an SDL_Delay(1) will wait for an average of 5
milliseconds. That happens because the clock ticks every 10
milliseconds. In a tight loop such as the one you describe the delay
will tend to synch up with the clock and you will usually delay slightly
less than 10 milliseconds.

o Event handling is OS specific so, to really understand what is going
on you have to look at the code for the OS you are using.
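The second point follows from arithmetic alone: with a 10 ms scheduler clock, a 1 ms sleep request cannot wake before the next clock tick, so the real delay is the request rounded up to a tick boundary, landing anywhere between just above zero and a full tick depending on where in the current tick the call happens. A sketch of the worst case, with hypothetical names:

```c
#include <stdint.h>

/* Worst-case sleep for a request of `req` ms on a scheduler clock that
 * ticks every `granularity` ms: the request rounded up to the next
 * tick boundary (both arguments in milliseconds, granularity > 0). */
uint32_t worst_case_delay(uint32_t req, uint32_t granularity)
{
    return ((req + granularity - 1) / granularity) * granularity;
}
```

So SDL_Delay(1) on a 10 ms clock can cost up to a full 10 ms, which is why a tight poll-then-delay loop can leave a surprisingly large window for input to pile up in.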

My guess is that the bar code scanner is sending keys very quickly and
some of them are getting dropped by the OS. How many characters are
being sent? Is the OS buffering them too? If so, how big is its buffer?
Is it big enough to hold all the data the card reader is sending? If
not, the characters are most likely being dropped by the OS during the
long wait for SDL_Delay(1) to return.

		Bob Pendleton

On Thu, 2003-11-06 at 12:51, barrett at 9hells.org wrote:

Regards,
barrett



Hey Barrett, are you pressing more than 1 key at a time?

if so check this out, and if not check it out too…useful knowledge to have

http://sjbaker.org/steve/omniv/keyboards_are_evil.html

----- Original Message -----

From: barrett at 9hells.org
To:
Sent: Thursday, November 06, 2003 11:45 AM
Subject: Re: [SDL] SDL_WaitEventTimeout

On Thu, Nov 06, 2003 at 04:51:40PM -0200, barrett at 9hells.org wrote:

In my main loop, I do SDL_PollEvent(), and in case there’s no event,
I do a SDL_Delay(1), just to prevent unnecessary 100% CPU usage.
To my surprise, SDL is missing some keys! That shouldn’t happen,
even if my event loop is not well designed, and is lagging to
catch the events, because (I think) there is an event queue (buffer),
right? If I remove the SDL_Delay(), this doesn’t happen.

Now it gets worse. I reduced the loop to this:

    while(1) {
            SDL_Event event;
            if(SDL_PollEvent(&event)) {
                    if(event.type == SDL_QUIT)
                            break;
                    if(event.type == SDL_KEYDOWN) {
                            int k = event.key.keysym.sym;
                            printf("keydown %d (%c)\n",
                                            k, isprint(k) ? k : '.');
                    }
            }
    }

and it is still missing keys sometimes!
How can this happen?

regards,
barrett



nevermind on my last post!

Didn’t notice you were using a scanner that was sending key events.

----- Original Message -----

From: atrix2
To:
Sent: Thursday, November 06, 2003 11:53 AM
Subject: Re: [SDL] SDL_WaitEventTimeout

Hey Barrett, are you pressing more than 1 key at a time?


You need to let us know which OS you are running on so we can properly
answer the question. Some points to consider.

Linux 2.4.21

o There is an event queue. If it fills up, SDL quietly drops the events.

I looked at src/events/SDL_events.c; it seems that the queue holds
128 events. The barcode reader sends 11 chars, so 22 events, which
shouldn’t be a problem.

o On most OSes an SDL_Delay(1) will wait for an average of 5
milliseconds. That happens because the clock ticks every 10
milliseconds. In a tight loop such as the one you describe, the delay
will tend to sync up with the clock and you will usually delay slightly
less than 10 milliseconds.

Well, I’m not able to pass the barcode cards twice in less than 50 ms.

o Event handling is OS specific, so to really understand what is going
on, you have to look at the code for the OS you are using.

My guess is that the barcode scanner is sending keys very quickly and
some of them are getting dropped by the OS. How many characters are
being sent? Is the OS buffering them too? If so, how big is its buffer?
Is it big enough to hold all the data the card reader is sending? If
not, the characters are most likely being dropped by the OS during the
long wait for SDL_Delay(1) to return.

I tested it in the console, and it never misses a key.

Bob Pendleton

Thanks,
barrett.

Might the scanner be defective and not be sending all the keys?

Oops!! I tried it many more times, and presto!
It missed keys on the console, so SDL is not to blame.
And I guess there is nothing I can do either, other than displaying
"error reading the card, please insert it again".

Sorry to have wasted your time here… :-)

Best regards,
barrett.

On Thu, Nov 06, 2003 at 06:41:48PM -0200, barrett at 9hells.org wrote:

I tested it in the console, and it never misses a key.

You need to let us know which OS you are running on so we can properly
answer the question. Some points to consider.

Linux 2.4.21

Make sure you are using SDL_INIT_EVENTTHREAD when you initialize SDL.
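For SDL 1.2 that means passing the flag to SDL_Init; a minimal init sketch (error handling trimmed, and note the flag is not supported on every platform, in which case SDL_Init fails):

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[]) {
    /* SDL_INIT_EVENTTHREAD runs event gathering in a separate thread,
     * so events keep being collected even while the main loop sleeps
     * in SDL_Delay(). */
    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_EVENTTHREAD) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* ... event loop goes here ... */

    SDL_Quit();
    return 0;
}
```
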

