Framerate -- increasing accuracy for compensation

Some programs, such as Tux Racer, dynamically adjust the resolution or
detail in order to make up for the difference in time. It uses gettimeofday()
on Linux platforms and SDL_GetTicks() on Windows platforms (if I recall
correctly)… I would bet that it would appear choppier under Windows for
this reason…

I'm working with SDL primarily on the Windows platform. My raycasting engine
uses SDL_GetTicks() to measure the difference between each time-slice and
adjust how fast to move or rotate accordingly, and I don't think it's perfect
(it seems a bit choppy). For example, if I measure the time it takes to
render a complete frame by subtracting newtick - oldtick, and it's only
at a 1 ms resolution while running at 36 fps (for example), that's 1000 / 36,
or a 27.78 millisecond difference; of course it would only measure 27. So
over a complete second, that's 36 * 27 or 972, a difference of 28
milliseconds… which is almost a 3% accuracy loss. At 48 FPS, it would only
return 20 ms, so 20 * 48 = 960, 1000 - 960 = 40, and 40/1000 = 4% accuracy loss.
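
One way to keep that 1 ms truncation from snowballing (a minimal sketch, not
the engine's actual code; the function names are made up) is to take each
frame's dt as the difference of two readings of the same running
SDL_GetTicks() counter, reusing the second reading as the start of the next
frame. The sum of the dt values then always equals the total ticks elapsed,
so the error over any interval stays below one tick instead of growing a
little every frame:

#include "SDL.h"

static Uint32 last_ticks;

void timing_init(void)
{
    last_ticks = SDL_GetTicks();
}

/* Returns the milliseconds elapsed since the previous call. */
Uint32 timing_frame_delta(void)
{
    Uint32 now = SDL_GetTicks();
    Uint32 dt = now - last_ticks;   /* unsigned subtraction is wrap-safe */
    last_ticks = now;               /* reuse this reading next frame */
    return dt;
}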

Right now my raycasting engine (software rasterizer) is a little slow and
needs to be rewritten, but trying to compensate for different rendering
speeds might prove challenging with an unpredictable accuracy loss after I
do this. I mean, I could sample the framerate every other frame, but I think
that's still gambling on the same problem as before… to get truly smooth
rendering I'd need to compensate based on what the FPS is right now. Then
again, it might not matter once I speed this sucker up.

Anyone run into this issue?

Matt Johnson

Anyone run into this issue?

You describe exactly the problem I had :slight_smile: Feel free to try my solution.

– Timo Suoranta – @Timo_K_Suoranta

Some programs, such as Tux Racer, dynamically adjust the resolution or
detail in order to make up for the difference in time. It uses gettimeofday()
on Linux platforms and SDL_GetTicks() on Windows platforms (if I recall
correctly)

Okay everyone, here’s the skinny:

SDL_GetTicks() on UNIX:
Uses gettimeofday()
Has a theoretical resolution of 1 microsecond
Has a real resolution of about 1 millisecond

SDL_GetTicks() on Windows:
Uses timeGetTime() (or GetTickCount() on WinCE)
Unknown resolution, but I think it’s about 1 millisecond for
timeGetTime() and about 5 milliseconds for GetTickCount()

SDL_GetTicks() on BeOS:
Uses system_time()
Has a theoretical resolution of 1 microsecond
Has a real resolution of about 1 millisecond (I think)

SDL_GetTicks() on MacOS:
Uses Matt Slot’s FastMilliseconds() package with CodeWarrior
and uses Microseconds() with Apple’s MPW build environment
FastMilliseconds() has a resolution of about 1 millisecond, I think
Microseconds has a resolution of about 5 milliseconds, I think

So, you’re probably safe assuming near 1 ms resolution for SDL_GetTicks()
on all platforms. This will increase in the future as real-time platforms
become more common and some experimental techniques are proven.
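
To see what you actually get on a particular machine rather than trust the
list above, a rough busy-waiting test like this (a throwaway sketch, not part
of SDL) watches the counter step a few times and keeps the smallest step,
since any single sample can be inflated by the scheduler:

#include "SDL.h"

Uint32 measure_tick_granularity(void)
{
    Uint32 best = (Uint32)-1;
    int i;

    for (i = 0; i < 16; i++) {
        Uint32 start = SDL_GetTicks();
        Uint32 now;
        while ((now = SDL_GetTicks()) == start)
            ;                            /* spin until the counter advances */
        if (now - start < best)
            best = now - start;
    }
    return best;                         /* smallest observed step, in ms */
}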

See ya,
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

Yeah, I'll look at the timing code in glElite and see if that helps. I
especially like that code where it checks to see if the FPS is greater (!)
than infinity. lol.

In case anyone is curious, gltron does this "wrong" also… but there is no
other way, I guess; you can't sample it every second because the adjustments
have to be made on the fly… I need to check out Tux Racer again. Maybe the
loss between frames is negligible and the adjustments I made to the
movement/rotation are where I'm losing the accuracy…

Anyway for the curious this is how gltron does it.

int updateTime() {
    game2->time.lastFrame = game2->time.current;
    game2->time.current = SystemGetElapsedTime() - game2->time.offset;
    game2->time.dt = game2->time.current - game2->time.lastFrame;
    /* fprintf(stderr, "dt: %d\n", game2->time.dt); */
    return game2->time.dt;
}

If time.current was 50, 90, 110, 140, 190 (for 5 frames),
time.dt would be … 40, 20, 30, 50.

Which, btw, what happens when time.current > 2^sizeof(time.current)?
hehe… remember the MS logging bug where after 49 days it would mess up? It was
because it couldn't fit the number of milliseconds in a 32-bit integer. See,
it's problems like these that keep me up at night.

This could occur, though, because SystemGetElapsedTime() might not start
at 0; it might start near 2^32… meaning a wrapping error, and time.current
would be giving you negative numbers…

void camMove() {
    camAngle += CAM_SPEED * game2->time.dt / 100;
    while (camAngle > 360) camAngle -= 360;
}

Which, btw, what happens when time.current > 2^sizeof(time.current)?
hehe… remember the MS logging bug where after 49 days it would mess up? It was
because it couldn't fit the number of milliseconds in a 32-bit integer.

If time.current is an int, let's say, then sizeof(time.current) = 4. 2^4 =
16. I think what you meant was 256^sizeof(time.current). Anyway, it all
depends on your application. For a game, running for 49+ days in one session
generally isn't a problem. I personally have never played a game for 49+ days
straight in one session, and I've played a lot of games. :slight_smile:

This could occur, though, because SystemGetElapsedTime() might not start
at 0; it might start near 2^32… meaning a wrapping error, and time.current
would be giving you negative numbers…

Yes, but it doesn't matter. Let's say it did wrap (assume 16-bit integers for
simplicity):

lastFrame = 32767 (0x7fff)
current = -32760 (0x8008)
dt = current - lastFrame:
dt = -32760 - 32767 = -65527

However, -65527 is outside the range of a 16-bit integer, so it gets wrapped
too. -65527 taken modulo 2^16 is 0x0009, so we actually end up with a value
of 9. So even though our timer wrapped, our difference calculation also
wrapped with it, giving us the correct number anyway.
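
A standalone check of that argument, assuming unsigned short is 16 bits
(which holds on the platforms discussed here):

#include <stdio.h>

int main(void)
{
    unsigned short lastFrame = 0x7fff;               /* 32767 */
    unsigned short current   = 0x8008;               /* -32760 read as signed */
    unsigned short dt = (unsigned short)(current - lastFrame);

    printf("dt = %u\n", (unsigned)dt);               /* prints 9 */
    return 0;
}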

See, it's problems like these that keep me up at night.

You can go get some rest now then. :)

On Thursday 05 April 2001 11:50, you wrote:

If time.current is an int, let's say, then sizeof(time.current) = 4. 2^4 =
16. I think what you meant was 256^sizeof(time.current). Anyway, it all
depends on your application. For a game, running for 49+ days in one session
generally isn't a problem. I personally have never played a game for 49+ days
straight in one session, and I've played a lot of games. :slight_smile:

Game servers, however, should be able to run more than 50 days in a row,
and they are often built on subsets of the client code.

See ya,
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

And someone might want to use a game as a long-term stress test… :wink:

Seriously, what about SDL based arcade game machines, built on affordable
"standard" PC components? They could very well be expected to run around the
clock for days or weeks…

You never know how people are going to use your software, so it's generally a
good idea to handle this kind of stuff nicely.

//David

On Thursday 05 April 2001 23:11, Sam Lantinga wrote:

If time.current is an int, let's say, then sizeof(time.current) = 4. 2^4
= 16. I think what you meant was 256^sizeof(time.current). Anyway, it
all depends on your application. For a game, running for 49+ days in one
session generally isn't a problem. I personally have never played a game
for 49+ days straight in one session, and I've played a lot of games. :slight_smile:

Game servers however should be able to run more than 50 days in a row
though, and they are often built on subsets of the client code.

You are both correct; however, I doubt you'd need millisecond precision over
time spans that long. If you want to timestamp events, such as when a game
started, then to the nearest second is probably fine. For that, you probably
wouldn't be using ticks. Usually you use ticks to track time between 2
events, and those 2 events usually aren't going to be all that far apart.

You do bring up an interesting point, though. SDL only has one mechanism for
tracking time, and that is fixed at milliseconds. It might be better if you
could specify the units you are interested in tracking instead, and you could
then expect the full range to be used. For example, if I want my results in
seconds, then it shouldn't calculate them in milliseconds and just divide by
1000, because I would lose the upper value range due to the wrap-around
occurring sooner.

Thinking about it, you probably only need about 3 ranges: seconds,
milliseconds, or maximum precision (CPU ticks for example). The maximum
precision option would have to be platform-specific, and would be in some
arbitrary units that you may or may not be able to convert to some variation
of seconds. Just an idea.

On Thursday 05 April 2001 15:09, you wrote:


Which, btw, what happens when time.current > 2^sizeof(time.current)?
hehe… remember the MS logging bug where after 49 days it would mess up? It was
because it couldn't fit the number of milliseconds in a 32-bit integer. See,
it's problems like these that keep me up at night.

This could occur, though, because SystemGetElapsedTime() might not start
at 0; it might start near 2^32… meaning a wrapping error, and time.current
would be giving you negative numbers…

Wraparound is solvable for elapsed timers, even at millisecond resolution (which
wraps at 49 days or so). The only trick is that you compare elapsed times, and
not whether a given time is greater or less than another time.

Here’s the routine right from my network code:

long CompareTimes(unsigned long time1, unsigned long time2)
{
    return ((signed) (time1 - time2));
}

This routine returns:

<0 : event #1 came before event #2
=0 : event #1 and #2 were simultaneous
>0 : event #2 came before event #1

Yep, subtraction. It works as long as the 2 events are within 24 days of each
other.
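
A possible usage sketch for that routine (not from the original post;
deadline_reached is a made-up name, and deadline is assumed to be a value
previously taken from the same millisecond counter). It stays correct even
if the counter wraps between scheduling and checking:

#include "SDL.h"

extern long CompareTimes(unsigned long time1, unsigned long time2);

/* Nonzero once the stored SDL_GetTicks() deadline has been reached. */
int deadline_reached(unsigned long deadline)
{
    return CompareTimes(SDL_GetTicks(), deadline) >= 0;
}
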
--
/* Matt Slot * Bitwise Operator * http://www.ambrosiasw.com/~fprefect/ *
 * "Did I do something wrong today or has the world always been like this *
 * and I've been too wrapped up in myself to notice?" - Arthur Dent/H2G2 */

If you were to do something like this, then you could take liberties with your
own code (not SDL, but the code of your game/whatever) and do things specific
to the hardware you are using.

In cases like this, portability of your game/whatever would be the least of
your concerns :wink:

(I’m not dissing the framerate issue… just thought I’d throw in an
observation).

On Thu, 05 Apr 2001, you wrote:

Seriously, what about SDL based arcade game machines, built on affordable
"standard" PC components? They could very well be expected to run around the
clock for days or weeks…


Sam “Criswell” Hart <@Sam_Hart> AIM, Yahoo!:
Homepage: < http://www.geekcomix.com/snh/ >
PGP Info: < http://www.geekcomix.com/snh/contact/ >
Advogato: < http://advogato.org/person/criswell/ >


You are both correct; however, I doubt you'd need millisecond precision over
time spans that long.

Right, but that’s not it; the problem is that if you don’t handle wrapping
correctly, your program might freeze for a good while, or even crash.

(BTW, forget about true ms precision in relation to wall clock time over more
than a few hours on any normal hardware; the standard oscillators just aren't
up to that kind of stability. :slight_smile: )

If you want to timestamp events, such as when a game
started, then to the nearest second is probably fine. For that, you
probably wouldn’t be using ticks. Usually you use ticks to track time
between 2 events, and these 2 events usually aren’t going to be all that
far apart.

Exactly - this is why wrapping shouldn’t be a problem. Just make sure you’re
using the same data type throughout the timing calculations, and wrapping
will be handled automatically thanks to the nature of the ALUs of pretty much
all CPUs used these days.

You do bring up an interesting point, though. SDL only has one mechanism
for tracking time, and that is fixed at milliseconds. It might be better
if you could specify the units you are interested in tracking instead, and
that you could expect the full range to be used. For example, if I want my
results in seconds, then it wouldn’t calculate it in milliseconds and just
multiply by 1000, because I would lose the upper value range due to the
wrap around occuring sooner.

Sounds nice, but it doesn’t come for free…

Thinking about it, you probably only need about 3 ranges: seconds,
milliseconds, or maximum precision (CPU ticks for example).

Seconds and ms sound ok, but I'd prefer a fixed resolution (µs or ns) for
the last one - dealing with exposed platform-dependent stuff all over the
code isn't very nice. (However, you should be able to get info on what kind
of accuracy you can expect, at run time.)

The maximum
precision options would have to be platform specific, and would be in some
arbitrary units that you may or may not be able to convert to some
variation of seconds. Just an idea.

Indeed, I’m counting frame duration in raw CPU cycles in my retrace sync
hacks, but it’s hardly what I’d like to see in a serious API… I don’t think
the scaling overhead is big enough to consider on any viable SDL targets. (It
should be less than the overhead of calling the underlying API in all cases
but the RDTSC instruction style implementations.)

//David

On Friday 06 April 2001 02:31, Jason Hoffoss wrote:


Exactly. MAIA's timestamped events (1/samplerate resolution, i.e. fractions of
ms, in 32-bit timestamps) rely on this, as do various other pieces of
code I've written on various platforms through the years.

I’ve heard rumors about CPUs that don’t wrap this way when ADDing/SUBing, but
I have yet to see one for myself. Does anyone know of one?

//David

On Friday 06 April 2001 03:01, Matt Slot wrote:

  return((signed) (time1 - time2));

Seriously, what about SDL based arcade game machines, built on affordable
"standard" PC components? They could very well be expected to run around
the clock for days or weeks…

If you were to do something like this, then you could take liberties with
your own code (not SDL, but the code of your game/whatever) and do things
specific to the hardware you are using.

In cases like this, portability of your game/whatever would be the least of
your concerns :wink:

Yes, of course, but that doesn’t really motivate breaking things, does it?
:slight_smile:

And whoever is building that arcade machine may not be the one who wrote the
game engine…

(I’m not dissing the framerate issue… just thought I’d throw in an
observation).

I’d say your observation is more relevant to the frame rate issues (as in
enforcing a fixed rate at say 60 Hz, as is common for arcade machines) than
to the timer wrap issues.

The latter will occur on any hardware sooner or later, so you’re probably
better off doing things right from the start, rather than leaving it around
to stab you in the back.

(16 years of experience taught me to always code that way… Cheating never
pays. Seriously.)

//David

On Friday 06 April 2001 03:07, Samuel Hart wrote:

On Thu, 05 Apr 2001, you wrote:

Maybe, but I'm not so sure that you could always get information on how long
a tick is. In Linux, /proc/cpuinfo is the only source of this info as far as
I know, but what if /proc doesn't exist for their kernel for whatever reason?
I'm not sure how you'd get it under Windows either. It would still be nice to
at least try to get it, and return an error if it can't be obtained.

The reason I suggested ticks as the last option was because I figure at
resolutions beyond 1 ms, you are most likely interested in comparing time
differences against other time differences, so the units wouldn’t really be
important. I’m probably just biased by not seeing another use for it. :slight_smile:
What I’d probably use it for is profiling, so I’d be interested to see, for
example, how long AI takes to run as a percentage of the frametime. For
doing this, it doesn’t matter what units you use, as long as both time spans
are calculated using the same unit of measure (ticks or whatever).
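
A sketch of that kind of profiling, using SDL_GetTicks() milliseconds for
simplicity (run_ai() and render_frame() are placeholders; any counter would
do as long as both spans use the same unit):

#include <stdio.h>
#include "SDL.h"

extern void run_ai(void);          /* placeholders for the real work */
extern void render_frame(void);

void profile_frame(void)
{
    Uint32 frame_start, ai_start, ai_time, frame_time;

    frame_start = SDL_GetTicks();

    ai_start = SDL_GetTicks();
    run_ai();
    ai_time = SDL_GetTicks() - ai_start;

    render_frame();
    frame_time = SDL_GetTicks() - frame_start;

    if (frame_time > 0)
        printf("AI: %lu of %lu ms (%.1f%%)\n",
               (unsigned long)ai_time, (unsigned long)frame_time,
               100.0 * ai_time / frame_time);
}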

However, I know how to accomplish this task now in Linux (thanks to the code
offered on this list earlier), and I've accomplished it in the past in
Win32, and these are the only 2 platforms I'll probably ever develop on, so
this is good enough for me now. It really wouldn't matter if it's in
SDL. Doubly so, in fact, since I'd only need this high-precision timing for
development, and not for the final code. So I'm neutral on this subject now.
Just too bad, as always, that there isn't an easier way for people to learn
about these tricks.

On Thursday 05 April 2001 18:23, you wrote:

Thinking about it, you probably only need about 3 ranges: seconds,
milliseconds, or maximum precision (CPU ticks for example).

Seconds and ms sound ok, but I'd prefer a fixed resolution (µs or ns)
for the last one - dealing with exposed platform-dependent stuff all over
the code isn't very nice. (However, you should be able to get info on
what kind of accuracy you can expect, at run time.)

Maybe, but I’m not as sure that you could get information on how long a
tick is always. In linux, /proc/cpuinfo is the only source of this info as
far as I know, but what if /proc doesn’t exist for their kernel for
whatever reason?

You'd have to measure it one way or another. Testing the TSC against some
other timer should work, but you'd have to give it a few tries and extract
the best results in some way.

It might work with something like what I do in the retrace sync code; I
repeatedly measure the time between two retraces, and track the minimum
value. It hits the video refresh rate practically dead center on the machines
I've tried it on so far; the drift is low enough to sync to the timer for
minutes without adjusting with real retrace syncs. :slight_smile:
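
For the TSC-against-another-timer idea, a rough calibration sketch might look
like this (x86 with GCC inline assembly assumed; it busy-waits for about
100 ms, and the usual caveats about the TSC on SMP or power-managed machines
apply):

#include "SDL.h"

static unsigned long long read_tsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

/* Estimate TSC ticks per millisecond by counting across ~100 ms. */
unsigned long long calibrate_tsc(void)
{
    Uint32 start_ms, end_ms;
    unsigned long long start_tsc, end_tsc;

    start_ms = SDL_GetTicks();
    while (SDL_GetTicks() == start_ms)       /* align to a tick boundary */
        ;
    start_ms = SDL_GetTicks();
    start_tsc = read_tsc();

    while (SDL_GetTicks() - start_ms < 100)  /* sample for ~100 ms */
        ;
    end_ms = SDL_GetTicks();
    end_tsc = read_tsc();

    return (end_tsc - start_tsc) / (end_ms - start_ms);
}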

I’m not sure how you’d get it under windows either.

You don’t have to, as there is a performance counter API ready for use.

Still
would be nice to try and get it at least, and return an error if it can’t
be gotten.

Yes. I think it should be possible. Could even turn out to be reliable! :wink:

The reason I suggested ticks as the last option was because I figure at
resolutions beyond 1 ms, you are most likely interested in comparing time
differences against other time differences, so the units wouldn’t really be
important.

Yeah, that’s the case with the retrace stuff; the only unit I use is frames,
with 7 bits of fixed point decimals (nice unit, eh? ;-), and that’s based on
the initial refresh rate measurement.

I’m probably just biased by not seeing another use for it. :slight_smile:
What I’d probably use it for is profiling, so I’d be interested to see, for
example, how long AI takes to run as a percentage of the frametime.

Hmm… Yeah; beats the good ol' "color bar" method. :slight_smile:

//David

On Friday 06 April 2001 05:09, Jason Hoffoss wrote:

On Thursday 05 April 2001 18:23, you wrote:

You’d have to measure it one way or another. Testing the TSC against some
other timer should work, but you’d have to give it a few tries and extract
the best results in some way.

I wonder how the BIOS knows your CPU frequency?

Hmm… Yeah; beats the good ol' "color bar" method. :slight_smile:

Too bad the good old color bar is not so easy with OpenGL since you
can’t poke the background color in realtime :smiley:

– Timo Suoranta – @Timo_K_Suoranta

SDL_GetTicks() on UNIX:
Uses gettimeofday()
Has a theoretical resolution of 1 microsecond
Has a real resolution of about 1 millisecond

gettimeofday() on Linux (at least) uses cycle counters if available.
Exceptions are mainly non-synchronised SMP systems, which probably use some
kind of real-time clock or (at the lowest level) the jiffy counter.

Thus, as long as SDL_GetTicks() returns milliseconds, there is little point
in changing the implementation by playing games with cycle counters.

Game servers however should be able to run more than 50 days in a row
though, and they are often built on subsets of the client code.

And often not — there are reasons not to use SDL at all in a game server,
since no GUI may be needed.

But your point is taken. Despite the problems with 64-bit ints on 32-bit
platforms (which should be familiar to anyone who has used gcc more than
casually), I think a 64-bit returning interface is the way to go,
and it enables higher resolution as well. The thing left to consider
is whether we should choose a fixed unit (say, nanoseconds) or the
best available (the advantages/disadvantages are obvious).

We should still keep the current GetTicks() API, since it doesn't cost us
anything to do so; 32-bit is convenient for non-server code, and
ms is quite a handy unit for game timing since it covers the entire
scale of human interaction/response well.
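
Purely as a hypothetical illustration of the shape such a call could take
(none of these names exist in SDL; the toy body just widens SDL_GetTicks(),
so it keeps the 1 ms resolution and the 49-day wrap of the underlying
counter, where a real version would read a higher-resolution source):

#include "SDL.h"

/* Hypothetical: nanoseconds since an arbitrary start point, in 64 bits. */
unsigned long long SDLX_GetTicksNS(void)
{
    return (unsigned long long)SDL_GetTicks() * 1000000ULL;
}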

Sorry to be joining the conversation so late. This may be what you are
discussing: I have built my program around the SDL_GetTicks() function.
Everything is scheduled to go after a certain number of ticks has passed.
Yesterday Sam mentioned that a "tick" may be a different length depending on
your OS. Do you imagine the difference is enough to throw multiplayer
(cross-platform) out of sync?
Thanks,
Jaren Peterson

Sorry to be joining the conversation so late. This may be what you are
discussing: I have built my program around the SDL_GetTicks() function.
Everything is scheduled to go after a certain number of ticks has passed.
Yesterday Sam mentioned that a "tick" may be a different length depending on
your OS.

I might be missing something, but I believe one ms is one ms is one ms… :wink:

If you get something “significantly” different, it’s either a bug, or broken
hardware. Now, some platforms may well have inaccurate "multimedia timer"
implementations…

Anyway, what is “significant”, in this context…?

Do you imagine the difference is enought to throw multiplayer
(cross platform) out of sync?

Hmm… PCs seem to drift up to a few minutes per month in my experience.
That’s a few ms per minute, which may well be a serious problem in a fast
paced game with long sessions…

I’d guess it’s a good idea to sync the local game time of the clients to the
game server. :slight_smile:

Add a global offset to every timestamp you read, and adjust that offset by a
few ms a few times per minute, or something like that.

sync
{
    get timestamp A;
    query server time;
    await reply;
    get timestamp B;  // also our current local time
    server_time = reply.time + (B - A) / 2;
    local_time_offset += B - server_time;
}
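
A more concrete sketch of the same idea (the names and the blocking
query_server_time() call are made up; the sign convention here is
offset = server time minus local time, and the offset is nudged gradually so
the game clock never jumps or runs backwards, in the spirit of adjusting by a
few ms at a time):

#include "SDL.h"

extern Uint32 query_server_time(void);   /* hypothetical blocking request/reply */

static Sint32 game_time_offset = 0;      /* added to SDL_GetTicks() */

Uint32 game_time(void)
{
    return SDL_GetTicks() + game_time_offset;
}

void sync_to_server(void)
{
    Uint32 a = SDL_GetTicks();
    Uint32 server = query_server_time();
    Uint32 b = SDL_GetTicks();

    /* Estimate of the server clock at local time b (half the round trip). */
    Uint32 server_at_b = server + (b - a) / 2;

    /* Move part of the way toward the estimate instead of jumping. */
    Sint32 error = (Sint32)(server_at_b - game_time());
    game_time_offset += error / 8;
}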

//David

On Friday 06 April 2001 16:48, Jaren Peterson wrote: