MacOS SDL_GetTicks()

The current MacOS SDL_GetTicks() has a precision of 1/60 sec, which is
bad.
The current MacOS libraries provide a more precise function that can be
used this way:

Uint32 SDL_GetTicks(void)
{
    UnsignedWide Now;
    Microseconds(&Now);
    return (Now.lo >> 10);
}

It works on my computer; do you see any bug in it?

Luc-Olivier

Luc-Olivier de Charrière wrote:

The current MacOS SDL_GetTicks() has a precision of 1/60 sec, which is
bad.
The current MacOS libraries provide a more precise function that can be
used this way:

Uint32 SDL_GetTicks(void)
{
    UnsignedWide Now;
    Microseconds(&Now);
    return (Now.lo >> 10);
}

It works on my computer; do you see any bug in it?

Microseconds() is the official modern way to get time on the
Mac, but it looks like you’re dividing by 1024 rather than 1000,
which is not a very accurate conversion to milliseconds. You
could also freak out a program when the low part of Now wraps
around. There’s really no way around doing a genuine 64-bit / 1000
divide here, although since the divisor is fixed, there are
opportunities for optimization if you’re willing to do the logic.
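
A straightforward (if unoptimized) version might look something like this
untested sketch, assuming the compiler has a 64-bit unsigned type:

Uint32 SDL_GetTicks(void)
{
    UnsignedWide now;
    unsigned long long usecs;

    Microseconds(&now);
    /* Combine the hi/lo halves into a full 64-bit microsecond count,
       then do the real divide by 1000 to get milliseconds. */
    usecs = ((unsigned long long)now.hi << 32) | now.lo;
    return (Uint32)(usecs / 1000);
}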

Stan
@Stan_Shebs

An alternative to this would be UpTime, which provides the time since
system startup in nanoseconds :)

If that isn’t accurate enough, cough.

the code would change to:

static volatile UnsignedWide beginTime;

/*
 * This function is taken from the multiprocessing example on page 147
 * (Appendix B) of MultiprocessingServices2.1.pdf, which is included in
 * Apple's Multiprocessing SDK available at www.apple.com/developer
 */
static unsigned long long HowLong( AbsoluteTime endTime, AbsoluteTime bgnTime )
{
    AbsoluteTime absTime;
    Nanoseconds nanosec;
    unsigned long long value;

    absTime = SubAbsoluteFromAbsolute(endTime, bgnTime);
    nanosec = AbsoluteToNanoseconds(absTime);
    value = (((unsigned long long)nanosec.hi) << 32) + nanosec.lo;
    return value;
}

void SDL_StartTicks(void)
{
    beginTime = UpTime();
}

Uint32 SDL_GetTicks(void)
{
    AbsoluteTime endTime = UpTime();
    /* HowLong() returns nanoseconds; divide by 1,000,000 for milliseconds */
    unsigned long msecs = HowLong(endTime, beginTime) / 1000000;
    return msecs;
}

The bug with using Microseconds is that a preemptive thread created using
the MPLibrary terminates when calling this function, or even Delay.
That's strange, I think.

The function is provided in DriverServices.h, but I'm not sure if it
is supported on 68k Macs.

Sven.

Sven Schaefer wrote:

An alternative to this would be UpTime, which provides the time since
system startup in nanoseconds :)

The problem is that this is only available for rather recent
OS versions - more recent than the Mac I’m typing this on,
for instance!

The bug with using Microseconds is that a preemptive thread created using
the MPLibrary terminates when calling this function, or even Delay.
That's strange, I think.

That is strange - did you read this, or find out experimentally?
Killing a thread seems like a pretty serious bug, hard to imagine
that this hasn’t been fixed by now. I’ll check in Radar tomorrow,
see if there’s any more info on this one.

The function is provided in DriverServices.h, but I'm not sure if it
is supported on 68k Macs.

I’m pretty sure it’s not available.

Stan

I found the bug experimentally. I implemented some kind of working preemptive
thread support for the Mac using the MPLibrary, but this problem showed up,
so I played around with the testthread example in SDL. I had some problems
with UpTime, too, but these were my fault (a stupid type cast bug :) ).

BTW: are threads supported on 68k Macs? Did you ever program threads?

The thing is that I know of no proper way to debug threads, so perhaps these
functions only caused a lockup of the thread and not its termination (perhaps
they are not thread safe?), but I am not able to debug threads using
CodeWarrior, and preemptive threads don't show up in the processes window of
CW.

Perhaps you know a way?

I already sent my source to Darrel; perhaps you'd like it too? Hm. What Mac
do you program on?

Sven

Microseconds() is the official modern way to get time on the
Mac, but it looks like you’re dividing by 1024 rather than 1000,

That's right.
I'm currently working on optimising a program, so I
tried to make it as fast as possible; that's why I chose this way.

There's really no way around doing a genuine 64-bit / 1000
divide here, although since the divisor is fixed, there are
opportunities for optimization if you’re willing to do the logic.

I'm going to make it cleaner, so it wraps around after 32 bits
and uses a /1000 operation.

Luc-Olivier

Make it work, make it right, THEN make it fast.

On Tue, Sep 19, 2000 at 11:33:22PM +0200, Luc-Olivier de Charrière wrote:

Microseconds() is the official modern way to get time on the
Mac, but it looks like you’re dividing by 1024 rather than 1000,

That's right.
I'm currently working on optimising a program, so I
tried to make it as fast as possible; that's why I chose this way.


“Isn’t vi that text editor with two modes… one that beeps and one
that corrupts your file?” – Dan Jocabson, on comp.os.linux.advocacy

Make it work, make it right, THEN make it fast.

Also, “don’t sweat the small stuff” and profile

I get the current time once per frame. It could take 10 times longer
(actually I think mine probably does what with all the frame rate
calculations), and it wouldn’t affect the overall program speed. Just do
the divide and forgettaboutit.

–Manny

In article <39C6FCE2.865E695B at shebs.cnchost.com>, sdl at lokigames.com wrote:

The bug with using Microseconds is that a preemptive thread created using
the MPLibrary terminates when calling this function, or even Delay.
That's strange, I think.

That is strange - did you read this, or find out experimentally?
Killing a thread seems like a pretty serious bug, hard to imagine
that this hasn’t been fixed by now. I’ll check in Radar tomorrow,
see if there’s any more info on this one.

MPThreads cannot call the Toolbox, Device Manager, File Manager, or anything
that may make a 68k mixed mode switch – because the emulator sits low on the
hardware and is single threaded.

QuickTime puts a 68k patch on Microseconds for nefarious reasons of its own,
and this is what is killing your thread. You will need to use UpTime() or
something similar to get the time from a preemptive thread. Take a look at
my FastTimes library for a good example:

http://www.ambrosiasw.com/~fprefect/FastTimes.sit.bin
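
In other words, an UpTime-based version along the lines Sven posted above is
the safe route from an MP task, since it never makes a mixed-mode switch.
A minimal, untested sketch (the names here are only illustrative):

/* Untested sketch: millisecond ticks via UpTime(), safe to call from a
   preemptive MP task because it never drops into the 68k emulator. */
static AbsoluteTime startTime;

void SDL_StartTicks(void)
{
    startTime = UpTime();
}

Uint32 SDL_GetTicks(void)
{
    Nanoseconds elapsed =
        AbsoluteToNanoseconds(SubAbsoluteFromAbsolute(UpTime(), startTime));
    unsigned long long ns =
        ((unsigned long long)elapsed.hi << 32) | elapsed.lo;
    return (Uint32)(ns / 1000000);    /* nanoseconds -> milliseconds */
}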

Sam, did you ever get the ASM to compile under MPW?

Matt

/* Matt Slot * Bitwise Operator * http://www.ambrosiasw.com/~fprefect/ *
 * "Did I do something wrong today or has the world always been like this *
 * and I've been too wrapped up in myself to notice?" - Arthur Dent/H2G2 */

I'm going to make it cleaner, so it wraps around after 32 bits
and uses a /1000 operation.

What about this one:

Uint32 SDL_GetTicks(void)
{
    UnsignedWide microSecWide;
    UInt64 microSecLong, microSecLongHi;

    Microseconds(&microSecWide);
    /* Combine hi/lo into a full 64-bit microsecond count, then divide
       by 1000 to get milliseconds. */
    microSecLongHi = microSecWide.hi;
    microSecLong = microSecWide.lo | (microSecLongHi << 32);
    return (Uint32)(microSecLong / 1000);
}

I was unable to avoid the use of UInt64 to keep the 32-bit wrap.
My compiler (CodeWarrior 5) handles that correctly, and it works on my
computer.
I'm waiting for new comments. (Too slow… perhaps.)

Has anyone found a way to prevent threads from collapsing
while using Microseconds()?

Luc-Olivier

Sam, did you ever get the ASM to compile under MPW?

No. The latest CVS snapshot uses your code for the CodeWarrior project,
and the old code for the MPW build, but I’d like to change that as soon
as somebody figures out how. :)

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software