Blitting / event handling with multiple threads

Christian Morgner <christian.morgner at postamt.cs.uni-dortmund.de> wrote:

Now the docs say that SDL_Flip() waits for vertical retrace, thus
delaying the execution of everything within the while() loop?

Whether SDL_Flip() waits for the monitor refresh depends on many factors
(OS, driver, …). If you really don’t want to wait, why not use
SDL_UpdateRect() instead? Your game should run at maximum speed then.

clemens

Christian Morgner <christian.morgner at postamt.cs.uni-dortmund.de>
wrote:

Now the docs say that SDL_Flip() waits for vertical retrace, thus
delaying the execution of everything within the while() loop?

Whether SDL_Flip() waits for the monitor refresh depends on many factors
(OS, driver, …). If you really don’t want to wait, why not use
SDL_UpdateRect() instead? Your game should run at maximum speed
then.

Yeah - and there will be tearing and excessive heat. :-)

Why not a decoupled fixed logic frame rate + interpolation?

SDL Programming Examples
http://olofson.net/download/pig-1.0.tar.gz

Note that you can still run input in a separate thread, to get a fixed
control latency rather than quantization at low rendering frame
rates. You don’t have to make a total mess of everything by running
all of the game logic in the “input thread”… The only time it makes
sense to do that is in networked games, where the “frame rate
induced” control latency would add to the total when running the game
logic in the rendering thread. Even then, it might actually be
sufficient to just do input and networking in a separate thread, and
keeping the local game logic in the rendering thread… The local
display will have a certain minimum latency anyway, and it doesn’t
matter if you do the logic when reading input events or in the
rendering loop; the input to visible feedback latency is the same
regardless.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
http://olofson.net
http://www.reologica.se

On Tuesday 01 February 2005 21.05, Clemens Kirchgatterer wrote:

contracts that most techies work under in the US grant ownership of
everything they write to their employer even if they do it on their own
time on their own equipment.

[Hoping this isn’t getting way off topic…]

Employment contracts aren’t written in stone. One company I worked for had
such a clause in their contract, and because I write FOSS, I explicitly asked
that it be removed during my employment interview. The company graciously
agreed to do so (probably because they knew, without my having to say so,
that I wouldn’t work for them if they didn’t).

If you want to be a FOSS developer, you pretty much have to get an exemption
to such contract clauses.

Jeff

On Tuesday 01 February 2005 08:10 am, Bob Pendleton wrote:

If you aren’t paranoid you don’t know enough about the law.

Sad, but true :-(

On Tuesday 01 February 2005 08:59 am, Bob Pendleton wrote:

Note that you can still run input in a separate thread, to get a fixed
control latency rather than quantization at low rendering frame
rates. You don’t have to make a total mess of everything by running
all of the game logic in the “input thread”… The only time it makes
sense to do that is in networked games, where the “frame rate
induced” control latency would add to the total when running the game
logic in the rendering thread. Even then, it might actually be
sufficient to just do input and networking in a separate thread, and
keeping the local game logic in the rendering thread… The local
display will have a certain minimum latency anyway, and it doesn’t
matter if you do the logic when reading input events or in the
rendering loop; the input to visible feedback latency is the same
regardless.

Interesting. Do you call SDL_PumpEvents() periodically in your rendering
thread’s code?

Bruno

On Tue, 1 Feb 2005 21:30:09 +0100, David Olofson wrote:

[…]

Interesting. Do you call SDL_PumpEvents() periodically in your
rendering thread’s code?

That’s one way of doing it; checking for events, timestamping any you
get using SDL_GetTicks() to potentially provide ms accurate input
timing.

As an extra bonus, this method avoids adding your own thread sync
code, since you stay in the rendering thread, and it works on
platforms without threading support.

However, AFAIK, on most platforms SDL gathers events using an event
thread that spins at 100 Hz, so 10 ms would be the best granularity
you can get, regardless of how and when you read events via SDL.

Bypassing SDL would be the only option to get below 10 ms, but I
strongly doubt it would make a difference, unless you’re implementing
a softsynth or something… (And then you have to use some other API
for MIDI anyway, automatically avoiding the event loop granularity
issue.)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
http://olofson.net
http://www.reologica.se

On Wednesday 02 February 2005 00.57, Bruno Martínez Aguerre wrote:

[…]

Interesting. Do you call SDL_PumpEvents() periodically in your
rendering thread’s code?

That’s one way of doing it; checking for events, timestamping any you
get using SDL_GetTicks() to potentially provide ms accurate input
timing.

As an extra bonus, this method avoids adding your own thread sync
code, since you stay in the rendering thread, and it works on
platforms without threading support.

However, AFAIK, on most platforms SDL gathers events using an event
thread that spins at 100 Hz, so 10 ms would be the best granularity
you can get, regardless of how and when you read events via SDL.

This isn’t true under Windows, right?

Bypassing SDL would be the only option to get below 10 ms, but I
strongly doubt it would make a difference, unless you’re implementing
a softsynth or something… (And then you have to use some other API
for MIDI anyway, automatically avoiding the event loop granularity
issue.)

Does Bob Pendleton’s event library do any better in this regard?

Bruno

On Wed, 2 Feb 2005 02:05:47 +0100, David Olofson wrote:

On Wednesday 02 February 2005 00.57, Bruno Martínez Aguerre wrote: