if time.current is an int, let's say, then sizeof(time.current) = 4.
2^4 = 16. I think what you meant was 256^sizeof(time.current).
Anyway, it all depends on your application. For a game, running for
49+ days in one session generally isn’t a problem. I personally have
never played a game for 49+ days straight in one session, and I’ve played
a lot of games.
Game servers, however, should be able to run more than 50 days in a row,
and they are often built on subsets of the client code.
And someone might want to use a game as a long-term stress test…
Seriously, what about SDL based arcade game machines, built on affordable
"standard" PC components? They could very well be expected to run around
the clock for days or weeks…
You never know how people are going to use your software, so it’s
generally a good idea to handle this kind of stuff nicely.
You are both correct; however, I doubt you’d need millisecond precision over
time spans that long.
Right, but that’s not it; the problem is that if you don’t handle wrapping
correctly, your program might freeze for a good while, or even crash.
(BTW, forget about true ms precision in relation to wall clock time over more
than a few hours on any normal hardware; the standard oscillators just aren’t
up to that kind of stability.)
If you want to timestamp events, such as when a game
started, then to the nearest second is probably fine. For that, you
probably wouldn’t be using ticks. Usually you use ticks to track time
between 2 events, and these 2 events usually aren’t going to be all that
far apart.
Exactly - this is why wrapping shouldn’t be a problem. Just make sure you’re
using the same data type throughout the timing calculations, and wrapping
will be handled automatically thanks to the nature of the ALUs of pretty much
all CPUs used these days.
You do bring up an interesting point, though. SDL only has one mechanism
for tracking time, and that is fixed at milliseconds. It might be better
if you could specify the units you are interested in tracking instead, and
that you could expect the full range to be used. For example, if I want my
results in seconds, then it wouldn’t calculate them in milliseconds and just
divide by 1000, because I would lose the upper value range due to the
wraparound occurring sooner.
Sounds nice, but it doesn’t come for free…
Thinking about it, you probably only need about 3 ranges: seconds,
milliseconds, or maximum precision (CPU ticks for example).
Seconds and ms sounds ok, but I’d prefer a fixed resolution (µs or ns) for
the last one - dealing with exposed platform dependent stuff all over the
code isn’t very nice. (However, you should be able to get info on what kind
of accuracy you can expect, at run time.)
The maximum
precision option would have to be platform specific, and would be in some
arbitrary units that you may or may not be able to convert to some
variation of seconds. Just an idea.
Indeed, I’m counting frame duration in raw CPU cycles in my retrace sync
hacks, but it’s hardly what I’d like to see in a serious API… I don’t think
the scaling overhead is big enough to consider on any viable SDL targets. (It
should be less than the overhead of calling the underlying API in all cases
but the RDTSC instruction style implementations.)
//David
.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

On Friday 06 April 2001 02:31, Jason Hoffoss wrote:
On Thursday 05 April 2001 15:09, you wrote:
On Thursday 05 April 2001 23:11, Sam Lantinga wrote: