Threading Problem

Hi all,

I got two very interesting and different responses to my
threading problem. The first suggestion is to insert delays
into my code to free up the system multitasker. The second
suggestion is to drop the second thread and use the
millisecond timer to schedule my game logic and rendering.

Both suggestions are good in their own way, and reveal
something about threads I hadn’t really given much thought
to.

I’m going to experiment along both lines and see where it
leads. I’ve already followed the second line of thought a
bit, but by no means exhaustively. And I’m not quite ready
to give up on threads just yet - despite the overhead.

Along the lines of doing my own scheduling, well, there is
quite a bit of that already in my game engine (which is
intended to be as multi-purpose as possible). Each individual
sprite maintains its own counter so that it can update as
often or as seldom as necessary. It’s as granular as it can
be, so that - theoretically - a sprite can update as often
as once per millisecond tick. If a game doesn’t need that
fine a grain then letting the thread sleep for 5 or 10ms
after each iteration would be alright.
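
In rough terms, the per-sprite scheduling looks something
like this (a simplified sketch with made-up names, not the
actual engine code):

```c
#include <SDL.h>

/* Hypothetical sketch of per-sprite scheduling: each sprite
   carries its own update period in ms and only runs when due. */
typedef struct {
    Uint32 period_ms;    /* how often this sprite wants to update  */
    Uint32 next_update;  /* tick (SDL_GetTicks) at which it is due */
    /* ... position, velocity, animation state, etc. ...           */
} Sprite;

void update_sprites(Sprite *sprites, int count, Uint32 now)
{
    for (int i = 0; i < count; ++i) {
        if (now >= sprites[i].next_update) {
            sprites[i].next_update = now + sprites[i].period_ms;
            /* ... advance sprite i by one of its own "frames" ... */
        }
    }
}
```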

I do recall having SDL_Wait calls in there at some point
(in the rendering thread, I believe) but I dumped them
as the engine started gaining some speed with threads.

The main reason I originally settled on a threaded model
was so that I could take advantage of the extra time between
a call to SDL_SwapBuffers() and the next VSYNC. Since we
don’t know what the refresh rate of the screen might be
(via SDL), nearly a full refresh period (at 60Hz, almost
1/60th of a second) could be lost between the call and the
actual buffer swap. In other words,
the frame rate will drop from 60fps to 30fps once the game
logic takes just a smidgen longer than 1/60th of a second.

To be fair, I’m not sure how long it takes for an entire
loop through the game logic. (But it’s a bit faster since
I modified the pixel-precise level of my collision code to
use arrays in place of calling sin, cos, and sqrt!)
My one condition is that an entire “frame” of game logic
has to be run through before rendering. No bailing out in
the middle.
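
For the curious, the sin/cos/sqrt change amounts to
precomputed lookup tables, roughly like this (a sketch only;
the table size and names here are made up):

```c
#include <math.h>

#define PI 3.14159265358979323846

/* Rough sketch of trading sin()/cos() calls for table lookups;
   the real collision code presumably picks its own resolution. */
static float sin_tab[360], cos_tab[360];

void init_trig_tables(void)
{
    for (int a = 0; a < 360; ++a) {
        sin_tab[a] = (float)sin(a * PI / 180.0);
        cos_tab[a] = (float)cos(a * PI / 180.0);
    }
}

/* Later, rotating a collision point needs no libm calls:
   x2 = x * cos_tab[angle] - y * sin_tab[angle];
   y2 = x * sin_tab[angle] + y * cos_tab[angle];              */
```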
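
And for comparison, the single-threaded, timer-driven loop
from the second suggestion would look roughly like this (a
sketch; update_game and render_frame are placeholder names,
and the step size assumes a 60Hz target):

```c
#include <SDL.h>

#define LOGIC_STEP_MS 16   /* ~60 logic frames per second */

/* Placeholder hooks for the real engine (hypothetical names). */
static void update_game(void)  { /* run one full frame of game logic */ }
static void render_frame(void) { /* draw, then SDL_GL_SwapBuffers()  */ }

static void run_loop(void)
{
    Uint32 next_logic = SDL_GetTicks();

    for (;;) {
        Uint32 now = SDL_GetTicks();

        /* Catch up on complete logic frames first; never render
           in the middle of one. */
        while (now >= next_logic) {
            update_game();
            next_logic += LOGIC_STEP_MS;
        }

        /* (handle SDL events / exit conditions here) */
        render_frame();   /* the swap blocks until the next vsync */
    }
}
```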

I’ll continue to tweak and see where it leads. I dig the
"hands-off" nature of threads, so I hope I get to keep
them. Even with the added overhead my game engine handles
hundreds of colliding objects easily.

--
Scott Lahteine <@Scott_Lahteine>
“No universe is perfect which leaves no room for improvement.”

On Wed, Apr 09, 2003 at 12:07:48AM -0700, Scott Lahteine wrote:

> Along the lines of doing my own scheduling, well, there is
> quite a bit of that already in my game engine (which is
> intended to be as multi-purpose as possible). Each individual
> sprite maintains its own counter so that it can update as
> often or as seldom as necessary. It’s as granular as it can
> be, so that - theoretically - a sprite can update as often
> as once per millisecond tick. If a game doesn’t need that
> fine a grain then letting the thread sleep for 5 or 10ms
> after each iteration would be alright.

5ms? In NT, you might get that precision (actually, I found that Sleep()
will give me 2ms delays happily in NT), but on most systems, you’ll never
get sub-10ms delays due to scheduling granularity.
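
Easy enough to check on any given box, by the way; a quick
test like this (just a sketch) shows the granularity you
actually get:

```c
#include <SDL.h>
#include <stdio.h>

/* Quick test: ask for a 5ms sleep repeatedly and report how long
   each one really took.  On many systems the answer is ~10ms. */
int main(int argc, char *argv[])
{
    (void)argc; (void)argv;

    if (SDL_Init(SDL_INIT_TIMER) < 0)
        return 1;

    for (int i = 0; i < 10; ++i) {
        Uint32 before = SDL_GetTicks();
        SDL_Delay(5);                       /* request 5ms */
        printf("asked for 5ms, got %ums\n",
               (unsigned)(SDL_GetTicks() - before));
    }

    SDL_Quit();
    return 0;
}
```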

> The main reason I originally settled on a threaded model
> was so that I could take advantage of the extra time between
> a call to SDL_SwapBuffers() and the next VSYNC. Since we

Be careful; you might not actually get any time during the buffer swap,
depending on how the drivers are implemented–they might be busy looping
waiting for vsync. I don’t know enough about drivers on any arch to
comment further (and it’s likely different on different archs).

Also, some advice: Threads suck. It’s extremely difficult to write robust
threaded programs; an order of magnitude more so than single-threaded
programs. I avoid threads unless they’re essential (eg. sound code). Even
in this case, where your two threads are running synchronously–no actual
parallelism except during the buffer swap–you’re having problems, and it
only gets worse when a program hits varied systems.

> don’t know what the refresh rate of the screen might be
> (via SDL), nearly a full refresh period (at 60Hz, almost
> 1/60th of a second) could be lost between the call and the
> actual buffer swap. In other words,
> the frame rate will drop from 60fps to 30fps once the game
> logic takes just a smidgen longer than 1/60th of a second.

Forget fragment shaders: OpenGL 2.0 had better give me triple-buffering. I’m
tired of people complaining that our program is running very slowly due to a
recent d3d->ogl conversion, just because we’ve gone from triple- to double-
buffering.

> I’ll continue to tweak and see where it leads. I dig the
> “hands-off” nature of threads, so I hope I get to keep
> them. Even with the added overhead my game engine handles
> hundreds of colliding objects easily.

Threads look neat, but actually using them is hell. :) (But, by all
means keep playing with them; I’m just registering my experience.)


Glenn Maynard