On timing and SDL (and message queues for that matter)

I’ve been working on and off on a whole bunch of projects (mostly
incomplete - quelle surprise, no? :)

Anyways I’ve had need of a high-resolution timing system in several
programs so I resurrected (and modified a wee bit) a timing system from a
MIDI player I wrote -many- years ago… But then I thought…

Does anyone else need high-resolution timing?
And do others HAVE it?

Could this be something to add to next SDL?
Or something that’s already there and I just don’t recognise it…

(I’m running a core thread at -very- high resolution time calc, then
triggering callbacks on events. Nice for single threads but… <g>.
Anyways for those who’ve worked with multi-instrument players (eg:
timidity, mikmod, …) such a system seems straightforward)

hrm - so it gets me - how do people handle timing? My experiences:
AVI player - constant frame speed
(I like avifile better - I’ve abandoned this)
Quicktime - multiple frame speeds - one for each media type +
a global one (plus possible subtimes for MIDI but I
didn’t look too closely into quicktime/music yet)
(hrm - on hold until I get some other code finished…
such as stuff I’m actually paid to write :)
MIDI - different framerates for each channel
MOD - constant lowlevel framerate, channels are multiples of core
(MIDI’s like this kind of too - but is less explicit over
core speed from what I remember)

What are other peoples’ experiences?
And how do, say, the game writers handle video render timing, action
timing, event handling and sprite (for want of a better term) handling? :)
… and does this type of system even matter?

I’m interested in time messages over message queues in SDL
btw… controlling timing across multiple threads is a -real- pain!
(feed the ‘delay’ into thread, thread delays set time to next action)

G’day, eh? :)
- Teunis

I’ve been working on and off on a whole bunch of projects (mostly
incomplete - quelle surprise, no? :)

hehe ;)

Anyways I’ve had need of a high-resolution timing system in several
programs so I resurrected (and modified a wee bit) a timing system from a
MIDI player I wrote -many- years ago… But then I thought…

Does anyone else need high-resolution timing?

Yep.

And do others HAVE it?

Yep.

Could this be something to add to next SDL?

Why not?

Or something that’s already there and I just don’t recognise it…

I don’t think so; it’s platform dependent, so it can’t even be supported on
all platforms. On the platforms that do support it, you either have too poor
scheduling timing to use it for anything but checking the current time, or
you have no interrupts/scheduling capabilities with that resolution at all.

Still, a very accurate way of reading the current time can be very useful at
times.

(I’m running a core thread at -very- high resolution time calc, then
triggering callbacks on events. Nice for single threads but… <g>.
Anyways for those who’ve worked with multi-instrument players (eg:
timidity, mikmod, …) such a system seems straightforward)

I don’t see why, really… What are you doing that requires more than one
thread for the music?

hrm - so it gets me - how do people handle timing? My experiences:
AVI player - constant frame speed
(I like avifile better - I’ve abandoned this)
Quicktime - multiple frame speeds - one for each media type +
a global one (plus possible subtimes for MIDI but I
didn’t look too closely into quicktime/music yet)
(hrm - on hold until I get some other code finished…
such as stuff I’m actually paid to write :)

Both are very latency insensitive things (except for MIDI - see below) that
make use of hardware buffering for audio. That is, you can buffer up a
whole second of audio if you like, as long as you can ask the audio driver
about the current play position for accurate audio/video sync.

The most critical part is the video, where you have at best one frame of
buffering. (Triple buffering w/ h/w pageflipping - not supported by SDL
AFAIK.) OTOH, video is a lot less critical WRT timing than audio, and with
current driver architectures, you can’t do much about it anyway. Just spin in
your main loop, pumping out new frames generated according to the audio
playback position.
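
A rough sketch of that loop in C (audio_samples_played() and render_frame()
are hypothetical helpers; SDL itself only hands you the audio callback, so
you’d count played samples there yourself):

    #include "SDL.h"

    /* Hypothetical: total samples the card has actually played so far,
     * typically derived from your audio callback. */
    extern Uint32 audio_samples_played(void);
    /* Hypothetical: draw the scene as it should look at time t (seconds). */
    extern void render_frame(SDL_Surface *screen, double t);

    static void video_pump(SDL_Surface *screen)
    {
        for (;;) {
            /* the audio card is the clock: samples / rate = seconds */
            double t = audio_samples_played() / 44100.0;
            render_frame(screen, t);
            SDL_Flip(screen);   /* may block until the actual page flip */
        }
    }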

MIDI - different framerates for each channel

Why do people bother with multiple “frame rates”?

MIDI songs (unlike modules) aren’t quantized to any useful “frame rate”
anyway, so the best way is to simply set up a thread (or ISR) at about 1 kHz
or higher (if possible, or if you really need accurate timing), and then
send events off when you’re at the tick that matches the exact play time best.

Note that the average MIDI message takes 1 ms just to send, and many synths
take anything from another ms to tens of ms (old machines mostly) to handle
the message. That is, there’s no point in going much higher than 1 kHz, even
for the fastest synths and samplers - you need a timestamped MIDI replacement
for higher accuracy.
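
A minimal sketch of such a tick loop using SDL threads (the sequencer queue
functions are hypothetical, and SDL_Delay(1) will really only give you the
scheduler’s granularity, not a hard 1 ms):

    #include "SDL.h"
    #include "SDL_thread.h"

    /* Hypothetical sequencer queue: timestamp (ms) of the next queued
     * event, or -1 if the queue is empty. */
    extern long next_event_time_ms(void);
    /* Hypothetical: pop the next event and write it to the MIDI port. */
    extern void send_next_midi_event(void);

    static int midi_tick_thread(void *unused)
    {
        (void)unused;
        for (;;) {
            Uint32 now = SDL_GetTicks();
            /* fire everything whose play time has come up */
            while (next_event_time_ms() >= 0 &&
                   (Uint32)next_event_time_ms() <= now)
                send_next_midi_event();
            SDL_Delay(1);   /* ask for ~1 kHz; expect coarser in practice */
        }
        return 0;   /* never reached */
    }

Started with SDL_CreateThread(midi_tick_thread, NULL).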

MOD - constant lowlevel framerate, channels are multiples of core
(MIDI’s like this kind of too - but is less explicit over
core speed from what I remember)

I don’t see any resemblance between module and MIDI playback (with real MIDI
output, rather than integrated soft synth) at all. Anything playing on a
built-in soft synth can run completely independent of the audio output
timing, as long as there is sufficient buffering in between.

Basically, timing is handled by interpreting the music events at the right
places in the stream - not by sending messages from a real time thread to a
real time synth engine. The latter just won’t work reliably, and will deliver
very poor timing, unless you’re using a hard RTOS. Besides, it just wastes
CPU time on context switching.

What are other peoples’ experiences?

The only problem I’ve ever had is with real time synths, being controlled
from external MIDI devices. That problem is solved by using a real OS or (to
some extent) by playing some tricks with the audio drivers.

For playback of music, audio and video files, you use buffering. :)

And how do, say, the game writers handle video render timing,

Render as fast as you can, preferably trying to figure out when the frame is
actually going to be displayed. That is the engine time you should advance
to before rendering a scene using the current state.
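
For instance (a sketch; advance_engine_to() and render_scene() are
hypothetical, and the display-time estimate is just a smoothed guess):

    #include "SDL.h"

    extern void advance_engine_to(double t);        /* hypothetical */
    extern void render_scene(SDL_Surface *screen);  /* hypothetical */

    static void game_loop(SDL_Surface *screen)
    {
        double frame_cost = 1.0 / 60.0;   /* initial guess, in seconds */
        for (;;) {
            Uint32 start = SDL_GetTicks();
            /* advance to when this frame will probably be *seen* */
            advance_engine_to(start / 1000.0 + frame_cost);
            render_scene(screen);
            SDL_Flip(screen);
            /* smooth the render+flip cost estimate a little */
            frame_cost = 0.9 * frame_cost
                       + 0.1 * (SDL_GetTicks() - start) / 1000.0;
        }
    }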

action timing,

No problem if you get the above right. Just don’t tie game time or frame
rate to the video refresh rate, except possibly on game consoles - and
preferably not even there, as there are both 50 and 60 Hz standards.
(Computers have no standard at all; just expect anything from 60 through
200+ Hz, and don’t expect to be able to set a desired refresh rate, as it’s
either not going to work, or will make the monitor freak out on some
systems. I’d rather not look at a 60 Hz display on my Eizo F980…
Stroboscope! :)

event handling and sprite (for want of a better term) handling? :)

Well, see above - action and sprite (ie the Control System as I prefer to
call all of that) are like rendering an audio/video stream; do it in fixed
size “chunks” and interpolate if you have to, or give it a delta time
argument and do everything exact down to whatever delta time you get.
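
The fixed-chunk variant might look like this (a sketch; control_step() and
render_interpolated() are hypothetical, and the 10 ms step is an arbitrary
choice):

    #include "SDL.h"

    #define DT_MS 10   /* fixed control system step: 100 Hz (assumed) */

    extern void control_step(void);   /* hypothetical: one exact chunk */
    /* Hypothetical: render between the two latest control steps;
     * frac is in [0,1). */
    extern void render_interpolated(double frac);

    static void run(void)
    {
        Uint32 sim_time = SDL_GetTicks();
        for (;;) {
            Uint32 now = SDL_GetTicks();
            while ((Uint32)(now - sim_time) >= DT_MS) {
                control_step();      /* exact, fixed-size chunk */
                sim_time += DT_MS;
            }
            /* interpolate rendering between the last two steps */
            render_interpolated((now - sim_time) / (double)DT_MS);
        }
    }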

External input should preferably be timestamped in relation to the Control
System time, similarly to what good soft synths do with MIDI input. A fixed
delay (dictated by any buffering you need for continuous, smooth output) is a
lot better than random latency jitter, especially in high frame rate games
with mouse control. (Try hitting a running player in Quake III with a mouse
that doesn’t track your movements smoothly…)
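
One way to get that fixed delay (a sketch; the queue helpers and
apply_event() are hypothetical):

    #include "SDL.h"

    #define FIXED_DELAY_MS 20   /* arbitrary; match your output buffering */

    typedef struct {
        SDL_Event ev;
        Uint32    stamp;        /* SDL_GetTicks() at arrival */
    } StampedEvent;

    extern void apply_event(const SDL_Event *ev);   /* hypothetical */
    extern void queue_put(const StampedEvent *e);   /* hypothetical FIFO */
    extern const StampedEvent *queue_head(void);    /* NULL if empty */
    extern void queue_pop(void);

    static void pump_input(void)
    {
        SDL_Event ev;
        /* timestamp events the moment they arrive... */
        while (SDL_PollEvent(&ev)) {
            StampedEvent se;
            se.ev = ev;
            se.stamp = SDL_GetTicks();
            queue_put(&se);
        }
        /* ...and apply them at a constant age, not a jittery one */
        for (;;) {
            const StampedEvent *e = queue_head();
            if (!e || SDL_GetTicks() - e->stamp < FIXED_DELAY_MS)
                break;
            apply_event(&e->ev);
            queue_pop();
        }
    }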

… and does this type of system even matter?

Yep, it makes all the difference, especially in fast games and serious
music/audio software.

I’m interested in time messages over message queues in SDL
btw… controlling timing across multiple threads is a -real- pain!
(feed the ‘delay’ into thread, thread delays set time to next action)

Why are you using threads in the first place? Unless they’re directly
interacting with multiple external interfaces (user input, audio output,
video output), and doing it at different timing resolution, threads will
usually just complicate things and make everything behave less reliably WRT
timing.

How I’d do it:

Use one thread (or ISR) for sound effects. (Timing sensitive!)

Use another thread for
	- input
	- control system
	- rendering

I’d possibly add a low priority thread for music and background noise (non
critical WRT timing, so additional buffering could be used to eliminate
drop-outs), and perhaps a high priority input thread, if possible.

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

On Wednesday 28 February 2001 18:38, winterlion wrote:

Uhm, sorry; perhaps I should explain where I get it from. :)

On Linux/x86, I use the RDTSC instruction, which returns the current number
of core clock cycles in 64 bits.
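
For reference, reading it with GCC inline assembly might look like this
(x86 only; the raw cycle count still has to be divided by the core clock
rate to become seconds):

    /* Read the Pentium Time Stamp Counter: raw core clock cycles
     * since reset, as a 64-bit value. x86 + GCC only. */
    static unsigned long long read_tsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
    }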

That works on Windows as well, but you can also use Windows multimedia
timers. The resolution of those can be set to 1 ms at best
(timeBeginPeriod(), IIRC), and they can generate messages, run callbacks (was
from IRQ context in Win16, but it’s just emulated crap under Win32 - sleep on
a thread message port instead), wake threads up and other things; just not
return the current time. Don’t expect anything like reliable 1 ms timing,
though…
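
The usual incantation is something like this (winmm; the callback body is
just a placeholder, and real jitter will be far worse than 1 ms):

    #include <windows.h>
    #include <mmsystem.h>   /* link with winmm.lib */

    static void CALLBACK tick(UINT id, UINT msg, DWORD_PTR user,
                              DWORD_PTR dw1, DWORD_PTR dw2)
    {
        /* called roughly every ms - keep it short, and don't
         * trust the period blindly */
    }

    static MMRESULT start_mm_timer(void)
    {
        timeBeginPeriod(1);   /* request 1 ms timer resolution */
        /* 1 ms period, 1 ms resolution, periodic callback */
        return timeSetEvent(1, 1, tick, 0, TIME_PERIODIC);
    }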

In real time audio applications for Linux/lowlatency, it’s also handy to just
use the audio card as a time base. You have to sleep on it anyway, and run at
high frequencies for low latency, so why not?

If you just want to output MIDI events with correct timing, there’s a new
Win32 API with timestamps nowadays. You should definitely use that for
anything but low latency real time processing, rather than stressing the
(excuse for a) scheduler with a 1 kHz periodic thread.

Oh, OSS and ALSA also have “timestamped MIDI” in the form of sequencers. The
ALSA version even seems to be usable for mixed buffered/real time MIDI, as
you would want in a sequencer to implement solid playback + software MIDI
through with MIDI effects processing. The OSS sequencer is basically useless
for anything like that, but it still seems to be useful for some basic MIDI
playback - provided your card’s driver actually supports it.

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

On Thursday 01 March 2001 12:42, David Olofson wrote:

And do others HAVE it?

Yep.

Could this be something to add to next SDL?

Why not?
good :)

I’ve just realized (reading this) that I was talking about several
different types of time-synchronization and time issues - I guess I should
have defined them all…

On platform dependency:
Accurate timing is platform-dependent, yes.
(RDTSC for some reason doesn’t work on my computer for instance -
or didn’t before I upgraded everything :)

framerate:
any action or set of actions that are supposed to take place
at a selected coherency are a frame. In video, framerate is how
many video frames can be updated per second. In audio it’s how
many frames (ie: packets) can be played. In networking, it’s how
many packets (ie: frames :) can be sent… or the rate they
should be sent…
Unless there’s a better definition this is what I use.

thread:
a coherent channel of actions. May correspond to a hardware
thread but it’s a lot easier to just make it a ‘cooperative’
thread unless actually necessary to make it a hardware thread.

Incidentally, except where real-time is really necessary, accurate current
time-keeping really isn’t all that necessary - but measuring how much time
has passed -is-. <g> (SDL_GetTicks works quite nicely for me :)
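
Something like this covers most cases (do_work() is a hypothetical
stand-in):

    #include <stdio.h>
    #include "SDL.h"

    extern void do_work(void);   /* hypothetical */

    static void timed_work(void)
    {
        Uint32 t0 = SDL_GetTicks();
        do_work();
        /* unsigned wrap-around keeps this correct even across the
         * ~49 day tick overflow */
        printf("took %u ms\n", (unsigned)(SDL_GetTicks() - t0));
    }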

on multithreaded music:
the trick is cooperative threading - not preemptive <g>
MOD files have 4-16 threads. XM has up to 32. MIDI has umm
up to 32 threads… (but usually one. I’ve got several MIDI
files that push this though…)

I should explain more - you’re right, planning (and passing information)
ahead of time is The Right Way. That’s how my movie/music players work…

and yah syncing is a real pain with latency-rich hardware such as audio
players… (or even videocards in old days) Or anything to do with
internet communications for that matter. (ie: videoconferencing :)

When I last played with MIDI btw - veryvery long time ago - MIDI supported
posting messages with timestamps. I don’t know if any hardware actually
supports this though - I’ve never owned any MIDI hardware. And this is
where the ‘frame rate’ mattered - each MIDI channel had its own ‘frame
rate’ that messages were set in… No wait, that’s quicktime - MIDI has a
fixed time scale…

Thank you MUCHLY for the MIDI info - I do intend to work with MIDI, I’ve
just never had the resources to do so until now…

And how do, say, the game writers handle video render timing,

Render as fast as you can, preferably trying to figure out when the frame is
actually going to be displayed. That is the engine time you should advance
to before rendering a scene using the current state.

action timing,

No problem if you get the above right. Just don’t tie game time or frame
rate to the video refresh rate, except possibly on game consoles - and
preferably not even there, as there are both 50 and 60 Hz standards.
(Computers have no standard at all; just expect anything from 60 through
200+ Hz, and don’t expect to be able to set a desired refresh rate, as it’s
either not going to work, or will make the monitor freak out on some
systems. I’d rather not look at a 60 Hz display on my Eizo F980…
Stroboscope! :)

<g>
yah my ‘timer’ tells how advanced each individual ‘time channel’ has gone
based on being told how much time has passed. I think I set it around
100 kHz for accuracy as that small a measure is rarely used and any time
block could be measured within it :)
It doesn’t actually try and run that fast - it’s designed to keep track of
when things should take place… hrm - hard to explain…
I tend to run -it- buffered ahead too - that way it posts messages to the
threads that handle actual situations and they can handle the right
delays. So far SDL_Delay works quite nicely here. Most other delay
methods I checked into caused artifacts (ie delays) in threads other than
the current one…

I’ve never looked too closely at handling video frame updates against the
hardware - mostly because as you say it’s way hard. And often on older
(and really new hardware at very high rates) the frame update is slower
than the graphics card… Double (or triple) buffering -is- a necessity
for high-traffic video.

event handling and sprite (for want of a better term) handling? :)

Well, see above - action and sprite (ie the Control System as I prefer to
call all of that) are like rendering an audio/video stream; do it in fixed
size “chunks” and interpolate if you have to, or give it a delta time
argument and do everything exact down to whatever delta time you get.

hrm - and that’s what I refer to when I talk about ‘framerate’ :)

External input should preferably be timestamped in relation to the Control
System time, similarly to what good soft synths do with MIDI input. A fixed
delay (dictated by any buffering you need for continuous, smooth output) is a
lot better than random latency jitter, especially in high frame rate games
with mouse control. (Try hitting a running player in Quake III with a mouse
that doesn’t track your movements smoothly…)

<g> perfect :)

[clip]

I’m interested in time messages over message queues in SDL
btw… controlling timing across multiple threads is a -real- pain!
(feed the ‘delay’ into thread, thread delays set time to next action)

Why are you using threads in the first place? Unless they’re directly
interacting with multiple external interfaces (user input, audio output,
video output), and doing it at different timing resolution, threads will
usually just complicate things and make everything behave less reliably WRT
timing.

I’m not writing a videogame at the moment. (actually I am but it’s low
priority… :)

for a ‘browser’ (fancy database front-end) multiple threads (and tasks) are
how it runs. So it seemed simple enough to take this model to videogames
:) (although maybe a bit ‘heavyweight’… hrm…)

I got curious how other programmers handled timing - be it tracking
timing, queued timestamped messages (that’s what I use) and handling
multiple ‘frame’ rates. (ie: audio is running at 44 kHz but blocks are in 512
sample blocks, which means the framerate is ~86 fps; actions on objects vary
from 1 fps to 1000+ fps… video can float anywhere from 10 fps up to 60 fps
in videocapture; rendering COULD be up to 200+ fps). I end up with a
lot of ‘delta’ objects (dD (velocity), dV (acceleration), dA (ummm rate of
acceleration?), and such)

(hertz would be a nice translation for how I use framerate actually -
except I find it tied too closely to audio/radio)

hrm - hope this message is coherent. I’ve edited it a couple of times but
can’t be sure. (shrunk it some too - I’ve got a bad habit of rambling :)

G’day, eh? :)
- Teunis

On Thu, 1 Mar 2001, David Olofson wrote:

On Wednesday 28 February 2001 18:38, winterlion wrote:

Could this be something to add to next SDL?

Why not?

good :)

I’ve just realized (reading this) that I was talking about several
different types of time-synchronization and time issues - I guess I should
have defined them all…

On platform dependency:
Accurate timing is platform-dependent, yes.
(RDTSC for some reason doesn’t work on my computer for instance -
or didn’t before I upgraded everything :)

It can’t fail if it’s there, as it’s built right into the CPU core and
instruction set. However, if you don’t have an Intel CPU (Pentium MMX or
later, IIRC), it won’t be using the same opcode, if it exists at all… ;)

Most modern CPUs seem to have something like the Time Stamp Counter, so it
should just be a matter of identifying the CPU and plugging in a suitable
function. If it’s not there (or the CPU is of an unknown type), one has to
resort to other methods, like Win32 multimedia timers or similar.

It would probably be a good idea if the API could give a hint about the
actual resolution, in cases where it would be better to use some other method
if the timers are too coarse.

framerate:
any action or set of actions that are supposed to take place
at a selected coherency are a frame. In video, framerate is how
many video frames can be updated per second. In audio it’s how
many frames (ie: packets) can be played. In networking, it’s how
many packets (ie: frames :) can be sent… or the rate they
should be sent…
Unless there’s a better definition this is what I use.

Ok.

BTW, with low frame rates, it’s not always sufficient to use one frame as the
unit of time. Higher accuracy might be required for some things.

thread:
a coherent channel of actions. May correspond to a hardware
thread but it’s a lot easier to just make it a ‘cooperative’
thread unless actually necessary to make it a hardware thread.

Also, cooperative “threading” is a lot more solid WRT timing, as you can
control exactly what’s done when, synchronously in all “threads”. For any
system that has only one frame rate to deal with, that’s a lot easier to get
right, and it also allows less buffering and thus lower latency.

Incidentally, except where real-time is really necessary, accurate current
time-keeping really isn’t all that necessary - but measuring how much time
has passed -is-. <g> (SDL_GetTicks works quite nicely for me :)

Well, the problem is that unless you know the exact video refresh rate and
sync the video thread to it, you have to check the current time before you
start rendering each frame, in order to render everything in the exact right
position. (That’s actually the same thing as syncing to the refresh rate, as
long as you stay at full frame rate - just different ways of getting the
current time…)

If you’re going to use subpixel accurate positioning, you absolutely must
know the exact display time of each frame, or there’s just no point. The
timing jitter of +/- 0.5 frames corresponds to +/- 0.5 pixels at 1
pixel/frame speed, so you might as well drop the interpolation if you don’t
get the time right within a fraction of a frame.

Now, if you know you’re going to stay at full frame rate, you can just check
the frame rate and then use that to update the control system time every
frame. Just tweak some if it should drift from wall clock time, and make
"brutal" changes only if you should miss frames.

on multithreaded music:
the trick is cooperative threading - not preemptive <g>
MOD files have 4-16 threads. XM has up to 32.

Ok… :)

MIDI has umm
up to 32 threads… (but usually one. I’ve got several MIDI
files that push this though…)

One? 32? Are you referring to tracks? I’m not sure there is a limit at 32,
but it might be - I haven’t investigated MIDI files very carefully. (I find
MIDI useless outside custom studio setups. GM/XG etc is crap - no control, no
real timbre standard, no level standard whatsoever etc etc. I’d rather use
either OPL3 FM or modules.)

Anyway, I do know that there is rudimentary channel + port support, so at
least it’s possible to deal with more than 16 channels. However, I’d never
use that in a GM or other standard file, as it’s impossible to tell where the
ports are routed. (Playing such a file on my setup, only the first port will
play GM; the other one will hit a custom non-standard JV-1080 performance
patch…)

I should explain more - you’re right, planning (and passing information)
ahead of time is The Right Way. That’s how my movie/music players work…

and yah syncing is a real pain with latency-rich hardware such as audio
players… (or even videocards in old days) Or anything to do with
internet communications for that matter. (ie: videoconferencing :)

Internet? Latency!? hehe

When I last played with MIDI btw - veryvery long time ago - MIDI supported
posting messages with timestamps.

Thinking about the MPU-401? (The real one, that is; not the crippled
copies.) Yeah, it had hardware timing support, but that’s pretty much the
only interface that ever had that, short of some of the latest high end
interfaces.

Some fools concluded that “computers are now sufficiently fast to do the MIDI
timing in software, so we don’t need to mess with MPU-401 style h/w timing.”

Well, they were right, and software based solutions are more flexible.
However, they didn’t realize that the days when you could hog the CPU without
restrictions to do hard real time stuff were soon to be over…

I don’t know if any hardware actually
supports this though - I’ve never owned any MIDI hardware.

Some of the external multichannel interfaces do. Not sure if standard driver
APIs support it, though. Win32 does have a new API with timestamps, but I
don’t know if everything is actually implemented all the way down to the
drivers.

And this is
where the ‘frame rate’ mattered - each MIDI channel had its own ‘frame
rate’ that messages were set in… No wait, that’s quicktime - MIDI has a
fixed time scale…

Thank you MUCHLY for the MIDI info - I do intend to work with MIDI, I’ve
just never had the resources to do so until now…

Just keep it in the studio, will you? ;) That’s where it works.

Right, you could add an E-mu Proteus 2000 or a Roland JV-2080 to the system
spec for music playback, but other than that, you’re either limited to no
control at all, or to SoundFonts. (And SoundFonts don’t play correctly on all
cards either; actually not at all on most cards…)

What I’d like to do for game music, as an alternative to the usual CD track
or mp3 solution, is to throw in a soft synth controlled by something similar
to MIDI files, instead of relying on any hardware dependent (non-)standard.
Beats modules hands down in all respects, unless you just can’t create with
anything but a classic tracker UI. (I almost forgot how to do that, so I
don’t care much…)

And how do, say, the game writers handle video render timing,

Render as fast as you can, preferably trying to figure out when the frame
is actually going to be displayed. That is the engine time you should
advance to before rendering a scene using the current state.

action timing,

No problem if you get the above right. Just don’t tie game time or
frame rate to the video refresh rate, except possibly on game consoles -
and preferably not even there, as there are both 50 and 60 Hz standards.
(Computers have no standard at all; just expect anything from 60
through 200+ Hz, and don’t expect to be able to set a desired refresh
rate, as it’s either not going to work, or will make the monitor freak
out on some systems. I’d rather not look at a 60 Hz display on my Eizo
F980… Stroboscope! :)

<g>
yah my ‘timer’ tells how advanced each individual ‘time channel’ has gone
based on being told how much time has passed. I think I set it around
100 kHz for accuracy as that small a measure is rarely used and any time
block could be measured within it :)
It doesn’t actually try and run that fast - it’s designed to keep track of
when things should take place… hrm - hard to explain…
I tend to run -it- buffered ahead too - that way it posts messages to the
threads that handle actual situations and they can handle the right
delays. So far SDL_Delay works quite nicely here. Most other delay
methods I checked into caused artifacts (ie delays) in threads other than
the current one…

I’ve never looked too closely at handling video frame updates against the
hardware - mostly because as you say it’s way hard. And often on older
(and really new hardware at very high rates) the frame update is slower
than the graphics card…

Actually, in that case you should lower the refresh rate, or upgrade the
machine. (Unless you can accept the unsmooth animation, that is.)

The frame rate cannot be dropped below the refresh rate without visible
artifacts, no matter how high the latter is! You’ll always get those ghost
image effects on fast moving objects, although they do get slightly less
visible at extremely high refresh rates, like 150 Hz or higher, depending on
the monitor…

(Note that the objects will move less between the frames at higher frame
rates, which also reduces the ghosting effect slightly - maybe enough to
produce an acceptable result in some cases. I wouldn’t bet on it though; try
it with a good 21" monitor, and you’ll be surprised how every minor artifact
becomes very visible…)

The reason for this appears to be similar to the reason why you have to use
oversampling/linear interpolation and/or filtering in audio DACs to avoid
audible artifacts - you can’t just pump a staircase shaped signal out and
expect a clean sound.

In both cases, we’re dealing with frame rates that are way beyond the
frequency range of the eye and the ear respectively, but still, we have to do
things right to avoid visible/audible artifacts.

Double (or triple) buffering -is- a necessity
for high-traffic video.

Yep.

[…]

I’m interested in time messages over message queues in SDL
btw… controlling timing across multiple threads is a -real- pain!
(feed the ‘delay’ into thread, thread delays set time to next action)

Why are you using threads in the first place? Unless they’re directly
interacting with multiple external interfaces (user input, audio output,
video output), and doing it at different timing resolution, threads will
usually just complicate things and make everything behave less reliably
WRT timing.

I’m not writing a videogame at the moment. (actually I am but it’s low
priority… :)

Personally, I’m thinking about giving up talking about priority - as you
might expect, I’ve been a hacker since the age of 10, and hacking always
(well, almost ;) has top priority! The problem is just that I have too many
projects going on… heh

for a ‘browser’ (fancy database front-end) multiple threads (and tasks) are
how it runs. So it seemed simple enough to take this model to videogames

:) (although maybe a bit ‘heavyweight’… hrm…)

Yeah… It would be possible to do it that way with an OS with decent real
time scheduling and/or enough buffering + timestamping, but I doubt it’s
worth the effort if you want really smooth animation. It’s just too messy to
keep things in exact sync and to get the interpolation right.

I got curious how other programmers handled timing - be it tracking
timing, queued timestamped messages (that’s what I use) and handling
multiple ‘frame’ rates. (ie: audio is running at 44 kHz but blocks are in 512
sample blocks, which means the framerate is ~86 fps; actions on objects vary
from 1 fps to 1000+ fps… video can float anywhere from 10 fps up to 60 fps
in videocapture; rendering COULD be up to 200+ fps). I end up with a
lot of ‘delta’ objects (dD (velocity), dV (acceleration), dA (ummm rate of
acceleration?), and such)

In some very “pixel oriented” games, it seems hard to get coherent behavior
regardless of video frame rate with “theoretically correct” timing (pos, v, a
etc and dt).

In the port of Project Spitfire (which was locked at 60 Hz, depending on a
custom VGA mode), I’m basically reconstructing the original control system
(which did use pos, speed and acc), but I keep it running at 60 Hz, as it
did in the original game. In order to get smooth animation, I’ve added a
linear interpolation filter to the “point” object, which means that I can
advance the control system with sub frame accuracy, and extract all
coordinates with sub pixel accuracy.

In the current version, I’m just rounding to the nearest integer pixel
position, but I’ll use the full resolution when I throw the OpenGL rasterizer
in. :)
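
Such a filter can be tiny; a sketch of the idea (the names are illustrative,
not from the actual Project Spitfire code):

    /* A linearly interpolating coordinate: the 60 Hz control system
     * writes positions, the renderer reads at any sub-frame time. */
    typedef struct {
        double x0;   /* position at the previous control step */
        double x1;   /* position at the latest control step */
    } LerpPoint;

    static void lerp_point_step(LerpPoint *p, double new_x)
    {
        p->x0 = p->x1;
        p->x1 = new_x;
    }

    /* frac in [0,1]: how far the display time is between the two steps */
    static double lerp_point_read(const LerpPoint *p, double frac)
    {
        return p->x0 + (p->x1 - p->x0) * frac;
    }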

(hertz would be a nice translation for how I use framerate actually -
except I find it tied too closely to audio/radio)

Hz would be the unit of framerate, I think… :)

hrm - hope this message is coherent. I’ve edited it a couple of times but
can’t be sure. (shrunk it some too - I’ve got a bad habit of rambling :)

So do I…! :)

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

On Friday 02 March 2001 13:08, winterlion wrote:

On Thu, 1 Mar 2001, David Olofson wrote:

On Wednesday 28 February 2001 18:38, winterlion wrote:

And how do, say, the game writers handle video render timing,

Render as fast as you can, preferably trying to figure out when the
frame is actually going to be displayed. That is the engine time you
should advance to before rendering a scene using the current state.

That’s one way to do it, and the method most, if not all, 3D games use. You
can also put in a delay each frame to try to hold a constant frame rate.

The frame rate cannot be dropped below the refresh rate without visible
artifacts, no matter how high the latter is! You’ll always get those ghost
image effects on fast moving objects, although they do get slightly less
visible at extremely high refresh rates, like 150 Hz or higher, depending on
the monitor…

Or multiples of the refresh period. If the refresh rate is 75 hertz, and you
can’t draw a frame that fast, you can draw a frame over 2 refreshes, page
flipping at the right time. Or are you talking about a non page flipped
situation here?

I got curious how other programmers handled timing - be it tracking
timing, queued timestamped messages (that’s what I use) and handling
multiple ‘frame’ rates. (ie: audio is running at 44 kHz but blocks are in
512 sample blocks, which means the framerate is ~86 fps; actions on objects
vary from 1 fps to 1000+ fps… video can float anywhere from 10 fps up to
60 fps in videocapture; rendering COULD be up to 200+ fps). I end up
with a lot of ‘delta’ objects (dD (velocity), dV (acceleration), dA (ummm
rate of acceleration?), and such)

I’m working on a fairly simple game right now, 2D based, and in Linux.
Apparently Linux only supports timing resolutions of 10 ms. On top of that,
it’s a multithreaded environment, with generally quite a few threads
running all the time. So it makes the frame rate problem rather interesting.
In order to keep movement simple, I’m trying to shoot for a constant
framerate of 60 fps. What I did was create a self-adjusting delay class that
tries to delay the correct amount of time each frame in order to produce 60
fps. It’s a little more complicated than that, though, of course. With a
multithreaded environment, you could get ‘timing spikes’ that screw up one
frame of timing, so you have to do averaging over several frames. It seems
to work pretty well, I think. We’ll see once I put it out for the public to
play with, though.
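
In spirit, probably something like this (a guess at the approach, not the
actual class; the 16-frame averaging window is arbitrary):

    #include "SDL.h"

    #define TARGET_MS  (1000.0 / 60.0)   /* aim for 60 fps */
    #define AVG_FRAMES 16                /* smoothing window */

    /* Call once per frame, after rendering. Averages the real work
     * time over several frames so one scheduling spike doesn't wreck
     * the pacing, then sleeps away whatever is left over. */
    static void frame_delay(void)
    {
        static Uint32 prev_end = 0;
        static double avg_work = 0.0;
        Uint32 now = SDL_GetTicks();
        if (prev_end) {
            /* running average of time spent outside this function */
            avg_work += ((double)(now - prev_end) - avg_work) / AVG_FRAMES;
            if (avg_work < TARGET_MS)
                SDL_Delay((Uint32)(TARGET_MS - avg_work));
        }
        prev_end = SDL_GetTicks();
    }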

No, I’m talking about any kind of double buffered display - anything else
would be totally unacceptable for anything, unless you can render from top
to bottom in accurate raster sync.

The problem with skipping refreshes is that the CRT will “flash” the same
image more than once, giving the eye a very strong impression of a still
image. The result is that you see ghost images of objects that move quickly -
and this happens even at very high refresh rates.

//David

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

On Monday 05 March 2001 22:05, Jason Hoffoss wrote:

And how do, say, the game writers handle video render timing,

Render as fast as you can, preferably trying to figure out when the
frame is actually going to be displayed. That is the engine time
you should advance to before rendering a scene using the current
state.

That’s one way to do it, and the method most, if not all, 3D games use.
You can also put in a delay each frame to try to hold a constant
frame rate.

The frame rate cannot be dropped below the refresh rate without visible
artifacts, no matter how high the latter is! You’ll always get those
ghost image effects on fast moving objects, although they do get slightly
less visible at extremely high refresh rates, like 150 Hz or higher,
depending on the monitor…

Or multiples of the refresh period. If the refresh rate is 75 hertz, and
you can’t draw a frame that fast, you can draw a frame over 2 refreshes,
page flipping at the right time. Or are you talking about a non page
flipped situation here?

Apparently Linux only supports timing resolutions of 10 ms.

Not quite right: Linux on most platforms has a timer granularity
(for timed sleeps) of 10 ms, but on Alpha it’s 1/1024 s. It can be
changed by modifying HZ and recompiling the kernel, and doing so
reportedly gives better latency in a variety of circumstances (though you
may need to recompile some userland apps/libs if they have HZ compiled in).