Proper game movement w/o limiting fps?

Guys, I'm struggling to find a way to code proper game movement (I define
this as: within the event loop you have tick intervals, and a delay if the
tick interval has not expired) without limiting my fps. If I put all the
draw functions inside the event loop, I'm limiting them with the call to
delay(). I'm using OpenGL as well... the reason I say this is because I
have tried threading. The main thread would initialise OpenGL and the
window, and it would also perform the drawing. From this thread I spawn an
event loop thread. In theory this should work, because the event loop
thread can block on the call to delay() while the main thread keeps
cycling through without blocking.

Problem is, I can't get this to work; it seems OpenGL is far from
thread-aware, and I end up with terrible input response at best, and seg
faults and a locked X server at worst.

The question I ask: this is a common problem, so what is the right
solution?

tia, Davin.

I specify the movement in my programs in pixels per 256 milliseconds.

This requires measuring the time it takes to render each frame. Declare
the timing variables once, before the loop:

Uint32 ticks, lastTicks, elapsed;

ticks = SDL_GetTicks();  /* initialise here, so the first frame's elapsed
                            time isn't the full time since startup */

Then, at the top of your main loop:

lastTicks = ticks;
ticks = SDL_GetTicks();
elapsed = ticks - lastTicks;

Then, when calculating movement, use the elapsed variable:

double x = 0.0;  /* position; initialise once, outside the loop */

x += ((double) elapsed / 256.0) * speed;

Here, speed is an integer that specifies movement in pixels per 256 ms.
You can use some other resolution besides 256 if you find that you need
to.
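Wrapped up as a helper, the scaling step looks like this (a minimal sketch
in plain C; no SDL is needed here, and in a real program `elapsed` would be
the SDL_GetTicks() delta computed above):

```c
#include <assert.h>

/* Movement scaled to elapsed time: 'speed' is in pixels per 256 ms,
   'elapsed' is the measured frame time in milliseconds. */
double scaled_movement(unsigned int elapsed, int speed)
{
    return ((double) elapsed / 256.0) * speed;
}
```

With speed = 10, a 256 ms frame moves the object exactly 10 pixels and a
128 ms frame moves it 5, so the on-screen speed is the same either way.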

I hope this applies to what you are doing.

--
dwl 4t wolsi d0t c0m

From: dgibb@optusnet.com.au (davin)
To: sdl at libsdl.org
Subject: [SDL] proper game movement w/o limiting fps?
Reply-To: sdl at libsdl.org


davin said:

Guys, i’m struggling to find a way to code proper game movement [...] I
have tried threading. The main thread would initialise opengl and the
window, it would also perform the drawing. From this thread i spawn an
event loop thread. [...]

Problem is i can’t get this to work, it seems opengl is far from thread
aware and i end up in terrible input response at best, seg faults and
locking X at worst.

as long as all OGL stuff is contained in its own thread, it’s all good

The question I ask, this is a common problem, what is the right
solution?

http://lgdc.sunsite.dk/articles/22.html


    -Erik <@Erik_Greenwald> [http://math.smsu.edu/~erik]

The opinions expressed by me are not necessarily opinions. In all probability,
they are random rambling, and to be ignored. Failure to ignore may result in
severe boredom or confusion. Shake well before opening. Keep Refrigerated.

davin wrote:

Guys, i’m struggling to find a way to code proper game movement [...]
w/o limiting my fps? [...]

My preferred technique is to keep a running average of how long it takes
to draw each frame, and scale all movement increments to meet the
desired speed. For instance, if it takes 16 milliseconds to draw a frame
(framerate of 60fps), and you want to base your timing calculations on
30fps (33 milliseconds per frame), you’ll need to scale all movement by
0.5. I prefer to do this with a global time_scale variable.

Random tips:
-Use a windowed average to avoid overreacting to minor fluctuations in
framerate.
-Don’t use an infinitely large windowed average. :)
-Only scale actual, final increments to positions; do not scale
acceleration values or your stored velocities.
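A windowed average along these lines might look as follows (a sketch, not
John's actual code; the 8-frame window and the 33 ms reference frame time
are assumptions for illustration):

```c
#include <assert.h>

#define WINDOW 8  /* frames to average over; a hypothetical choice */

/* Running windowed average of frame times, used to derive a global
   time_scale relative to a 33 ms (about 30 fps) reference frame. */
typedef struct {
    double samples[WINDOW];
    int    count, next;
} FrameTimer;

/* Record one frame's duration, overwriting the oldest sample. */
void timer_add(FrameTimer *t, double frame_ms)
{
    t->samples[t->next] = frame_ms;
    t->next = (t->next + 1) % WINDOW;
    if (t->count < WINDOW)
        t->count++;
}

/* Average frame time divided by the reference frame time. */
double time_scale(const FrameTimer *t)
{
    double sum = 0.0;
    int i;
    if (t->count == 0)
        return 1.0;              /* no data yet: assume reference speed */
    for (i = 0; i < t->count; i++)
        sum += t->samples[i];
    return (sum / t->count) / 33.0;
}
```

Eight 16.5 ms frames give a time_scale of exactly 0.5, matching the
60 fps vs. 30 fps example above.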

-John (replying to davin, Friday, March 1, 2002, 05:34 PM)


John R. Hall - KG4RUO - Resident, Sol System
CS Student, Georgia Tech - Author, Programming Linux Games

[…]

-Only scale actual, final increments to positions; do not scale
acceleration values or your stored velocities.

That doesn’t give you the exact same results regardless of frame rate. In
fact, you should “sort of” scale the accelerations and velocities - or
rather, you should figure out versions of all your movement equations
that can calculate the “exact” result for any delta time.

However, that’s just the easy part. Next, you’ll have to implement a
collision detection system that works with continuous time. That is, for
every collision, you have to figure out exactly when the collision
occurred, then calculate exactly where the objects involved would be at
that time, and finally call your collision event handlers. If your event
handlers look at any objects but the ones involved in the collision,
you’ll have to calculate their exact positions as well - which may
recursively trigger more collisions…

Sounds like great fun, doesn’t it? ;-)

Why bother?

Well, just try to play back some Doom demos on a different Doom port than
the one they were recorded on. The slightest difference in how
collision detection is handled can screw up everything totally. Not that
you would notice anything when playing the game, but the slight
differences between the ports are enough that demos don’t port. (This is
because Doom demos store only player movements, and rely on the world and
the monster “AI” to be consistent.)

Now, you don’t have to go through all this just to make use of the full
frame rate. (At this point, I’m getting that broken record feeling
again… Then again, it seems to be complex enough that I really need
to figure out a good way of explaining it.)

1. Simply fix your control system frame rate at some sensible
   rate; say 50 Hz, or if you're having tunnelling problems
   (missed collisions), some higher rate.

2. Then, once per rendered frame, check the current "control
   system time", and loop, calling the control system "frame"
   callback until the CS time matches real world time. (This
   will occasionally mean 0 calls, if the video frame rate is
   higher than the CS frame rate!)

3. Always keep the last two versions of the coordinates for
   each "object". Note that any scrolling backgrounds and
   stuff will also have to be "objects"!

4. When rendering each frame, calculate the actual graphic
   coordinates for every object by interpolating between the
   last two coordinates for it, using the fractional part of
   the CS time corresponding to "now" as the "balance".

Don’t know if there’s a real name for this method, but I refer to it as
"Motion Interpolation" in Kobo Deluxe, and it does a great job. Dead
simple to use from the application POV (it’s all in the engine), pretty
fast and scales perfectly to any frame rate.
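The four steps might be sketched like this in plain C (an illustration,
not Kobo Deluxe's actual code; the constant velocity and the 20 ms step,
i.e. the 50 Hz rate from step 1, are assumptions, and `now_ms` would come
from SDL_GetTicks() in an SDL program):

```c
#include <assert.h>

#define CS_STEP_MS 20.0  /* fixed control system step: 50 Hz (step 1) */

typedef struct {
    double prev_x, cur_x;  /* last two control-system positions (step 3) */
} Object;

/* One control-system "frame": move by a fixed amount per step. A
   constant velocity of 2 pixels per step, for brevity. */
void cs_frame(Object *o)
{
    o->prev_x = o->cur_x;
    o->cur_x += 2.0;
}

/* Steps 2 and 4: run the control system until it catches up with real
   time (possibly 0 iterations), then interpolate between the last two
   positions using the fractional remainder as the "balance". */
double update_and_interpolate(Object *o, double *cs_time_ms, double now_ms)
{
    double frac;
    while (*cs_time_ms + CS_STEP_MS <= now_ms) {  /* step 2 */
        cs_frame(o);
        *cs_time_ms += CS_STEP_MS;
    }
    frac = (now_ms - *cs_time_ms) / CS_STEP_MS;   /* step 4 */
    return o->prev_x + (o->cur_x - o->prev_x) * frac;
}
```

Note that the rendered position lags the control system by up to one step;
that is the price of always interpolating between two known states instead
of extrapolating into the future.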

(In reply to John Hall, Sunday 3 March 2002, 22:53.)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture           |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

John Hall wrote:

My preferred technique is to keep a running average of how long it takes
to draw each frame, and scale all movement increments to meet the
desired speed. [...]

this works really well. i’ve noticed in some of my games where things
happen randomly (like enemies firing) i’ve needed to multiply the random
"chance" that enemies will fire. otherwise when the game is slow they
shoot less (since the random call happens less often)

just multiplying the random chance for something to happen by the
timescale seems to keep the recurring random results the same when the
game is running fast or slow.
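A sketch of that correction in plain C (hypothetical names; `roll` would be
something like (double) rand() / RAND_MAX in practice, and `fire_chance`
is the probability per reference frame):

```c
#include <assert.h>

/* Returns 1 if the enemy fires this frame. Multiplying the chance by
   time_scale keeps the average firing rate the same whether frames are
   short (small time_scale) or long (large time_scale). */
int enemy_fires(double roll, double fire_chance, double time_scale)
{
    return roll < fire_chance * time_scale;
}
```

At half speed (time_scale 0.5) the per-frame chance is halved, but since
twice as many frames are drawn per second, the shots-per-second stay put.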

A simple way to achieve the same effect is to treat all actions in a
game as events that are stored in a priority queue ordered by
"gameTime." The main loop of the game keeps processing events until it
catches up with real time and then displays a frame. So, a machine gun
event would cause a bullet to be fired and queue another event at the
time in the future that the next bullet should be fired, and so on. A
random shoot/don’t-shoot decision is scheduled and then tested at the
correct time no matter what the frame rate. Locations are computed based
on the current gameTime.

You DO NOT have to execute an event at the actual real time of the
event; you can process them in a batch when you have the time. You will
usually process them after the next frame is displayed, which means they
are usually processed after the actual time of the event has passed.

This approach has the nice effect of keeping action in sync with the
frame rate no matter what the rate is or how it varies from frame to
frame. It also lets you drive long-term strategy using events set at any
time in the future, be it 1/10 of a millisecond from now, or 10 days from
now. I’ve even seen places where it was useful to insert events that
occurred in the past.

For those worried about efficiency, priority queues have at worst
O(log2(N)) insertion time, so you can handle huge queues in very little
time. For more information on this subject, look up discrete event
simulation.
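A minimal sketch of such a queue in plain C (illustrative only; it keeps a
sorted array instead of a proper binary heap, so insertion here is O(N)
rather than the O(log2(N)) mentioned above, and there is no overflow check):

```c
#include <assert.h>

#define MAX_EVENTS 64

typedef struct {
    double game_time;  /* when the event is due */
    int    id;         /* what to do; a stand-in for a callback */
} Event;

typedef struct {
    Event events[MAX_EVENTS];
    int   count;
} EventQueue;

/* Insert keeping the array sorted by game_time (soonest first). */
void queue_push(EventQueue *q, double game_time, int id)
{
    int i = q->count++;
    while (i > 0 && q->events[i - 1].game_time > game_time) {
        q->events[i] = q->events[i - 1];
        i--;
    }
    q->events[i].game_time = game_time;
    q->events[i].id = id;
}

/* Batch-process every event due at or before 'now', as described above.
   Returns the number of events handled. */
int queue_run_until(EventQueue *q, double now)
{
    int handled = 0;
    while (q->count > 0 && q->events[0].game_time <= now) {
        /* handle q->events[0] here; a real handler might call
           queue_push() again to schedule a follow-up event, like the
           machine gun example */
        int i;
        q->count--;
        for (i = 0; i < q->count; i++)
            q->events[i] = q->events[i + 1];
        handled++;
    }
    return handled;
}
```

The main loop calls queue_run_until() once per frame with the current
gameTime, then draws; events that fell between frames are still handled
in order, at their correct logical times.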

	Bob Pendleton.



SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl


+------------------------------------+
+ Bob Pendleton is seeking contract  +
+ and consulting work. Find out more +
+ at http://www.jump.net/~bobp       +
+------------------------------------+

John Hall wrote:

My preferred technique is to keep a running average of how long it takes
to draw each frame, and scale all movement increments to meet the
desired speed. [...]

Also, you will need to scale all timing. For instance, if your player
holds down the left mouse button to fire a round from a minigun, the
intervals between the shots will need to be scaled as well.
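One way to scale those intervals is an accumulator, sketched here in plain
C (hypothetical names; `elapsed_ms` is the measured frame time from the
technique above):

```c
#include <assert.h>

/* Fire whenever 'interval_ms' of game time has accumulated, regardless
   of how long each frame takes. Returns the shots fired this frame. */
int update_gun(double *accum_ms, double elapsed_ms, double interval_ms)
{
    int shots = 0;
    *accum_ms += elapsed_ms;
    while (*accum_ms >= interval_ms) {
        *accum_ms -= interval_ms;
        shots++;  /* spawn a bullet here */
    }
    return shots;
}
```

A long 250 ms frame then fires the right number of rounds at once instead
of just one, and the remainder carries over into the next frame.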
Another way of doing it is running the renderer in a separate thread and
the game logic and UI in another thread with higher priority. The
renderer then reads game variables off the game logic thread and uses
them to visualise the game.
The same techniques apply to sound, of course.

Regards
Anders

Another way of doing it is running the renderer in a separate thread and
the game logic and UI in another thread with higher priority.

How is that done using SDL?
Is there a way to specify thread priorities?

No, but if you have a real OS, the scheduler’s dynamic thread priority
adjustment logic will realize that the game logic thread will only wake
up for a short moment “every now and then”, and then nicely go back to
sleep. Meanwhile, the rendering thread will be detected as a CPU hog, as
it uses loads of CPU power, and maybe even refuses to sleep voluntarily.
(That would be the case if you don’t have retrace sync, and don’t want to
risk dropping frames by sleeping/yielding between frames.)

The net result is that the game logic thread will automatically be
given higher priority than the rendering thread. Unless you have a broken
OS, there’s no reason to tell the scheduler what to do here.

The only case where there’s any real point in setting thread priorities
explicitly is in complex systems, with closely interacting threads with
quickly varying CPU utilization. In such cases, you may boost
performance if you give the scheduler “hints” about which threads are
least likely to block more or less immediately if woken up. (The
scheduler can’t know about things that haven’t happened yet, and as the
CPU utilization in threads changes a lot, the statistics normally used
can become useless or even harmful.)

Oh, and there’s another exception as well: Real time threads. However,
there are very few operating systems that implement them in a useful form
(one of them is Linux/lowlatency), so it’s not too relevant as of now.
I’m quite sure it will be in the future, but right now, you don’t
really get what you ask for on your average OS - if there’s even a way to
ask for it! Besides, if you can get real time priority at all, it
usually requires a high privilege level, for obvious reasons. (You don’t
want your average untrusted application to be able to totally freeze your
system, do you?)

(In reply to Johan Hahn, Wednesday 6 March 2002, 22:33.)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture           |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'
