Minimum timeslice fun (was Re: semaphores and mutexes)

Unfortunately, this probably isn’t ideal due to minimum-timeslice in the
operating system’s scheduler – on i386 Linux, for example, you can’t delay
for less than 10ms – not fine-grained enough for nice animation timing. In
IceBreaker, I just sleep 10ms every loop, which works well enough but
obviously means that the game doesn’t actually run at the same speed on
different systems. Skipping delays every so often makes the animation jerky.
Anyone have a clever solution for this?

On Tue, May 28, 2002 at 09:42:56AM -0500, Darrell Walisser wrote:

Some games I’ve seen don’t use timers to control animation. They either fix the framerate by calling SDL_Delay() for the right amount every frame, or they


Matthew Miller @Matthew_Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

In IceBreaker, I just sleep 10ms every loop, which works well enough but obviously means that the game doesn’t actually run at the same speed on different systems. Skipping delays every so often makes the animation jerky.
Anyone have a clever solution for this?

I don’t know if I would call this clever, but I never use sleep()… just have your application loop continuously, peeking at the time every loop, and keep a time-ordered queue of everything that needs to occur at a specific time. When the top of the queue has something that is <= the current time, execute whatever task is at the top of the queue, get the time again (since the task you just did consumed some time), pop the top of the queue, and check whether the next item is <= the current time. When you’ve run out of queue items that need to be performed, you are automatically back in the busy loop, peeking at the time again and comparing it with the top of the queue.

This requires that you create some structure that is a container for all the parameters to a subroutine call, a pointer to the subroutine itself, and a timestamp. That is what goes into your queue, sorted by the timestamp.
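For concreteness, a minimal sketch of that arrangement in C with SDL might look like the following (purely illustrative; the type and function names are invented, not taken from Blake’s code):

```c
#include <stdlib.h>
#include "SDL.h"

/* One scheduled task: when to run it, what to call, and its argument. */
typedef struct Task {
    Uint32 when;                  /* SDL_GetTicks() timestamp, in ms      */
    void (*func)(void *arg);      /* pointer to the subroutine to call    */
    void *arg;                    /* its parameters, packed by the caller */
    struct Task *next;            /* singly linked list, sorted by 'when' */
} Task;

static Task *queue = NULL;

/* Insert a task, keeping the list ordered by timestamp. */
static void schedule(Uint32 when, void (*func)(void *), void *arg)
{
    Task *t = malloc(sizeof *t);
    Task **p = &queue;
    t->when = when;
    t->func = func;
    t->arg  = arg;
    while (*p && (*p)->when <= when)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}

/* The busy loop: run everything whose time has come, then peek again. */
static void run_loop(void)
{
    for (;;) {
        Uint32 now = SDL_GetTicks();
        while (queue && queue->when <= now) {
            Task *t = queue;
            queue = t->next;
            t->func(t->arg);        /* may call schedule() itself        */
            free(t);
            now = SDL_GetTicks();   /* the task consumed some time       */
        }
        /* ...poll SDL events here, then go around and peek at the time again */
    }
}
```

A task that needs to recur simply re-schedules itself before it returns, which is also how the physics update mentioned below can dump a batch of new entries into the queue.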

This will eat 100% of your CPU, but it is easy, elegant. When executing real
time multimedia applications, I give up on the idea of sharing the CPU.

Note there are no threads, yet multiple processing contexts are capable of executing with this scheme. There is no need for semaphores either. The only place I block executing things from the queue is when I execute a physics update - because that spawns a bunch of new entries into the queue - I let them all queue and sort correctly before I let the queue execute any more logic.

For complete honesty, I actually have three queues: 1) execute every loop 2)
execute next loop 3) execute at time

This is the inner loop for the entire simulation portion of the application-
capable of handling movies, real time animation, game play… pretty much
everything after startup and init.

-Blake

From: mattdm@mattdm.org (Matthew Miller)
Sent: Tuesday, May 28, 2002 8:15 AM
Subject: minimum timeslice fun (was Re: [SDL] semaphores and mutexes)

I don’t know if I would call this clever, but I never use sleep()… just have your application loop continuously, peeking at the time every loop and keep a time-ordered queue of everything that needs to occur at a specific time. When the top of the queue has something that is <= the current time,

Right, that’s not clever, that’s burning up the CPU and annoying users who
might look at top. :slight_smile:

This will eat 100% of your CPU, but it is easy, elegant. When executing real
time multimedia applications, I give up on the idea of sharing the CPU.

Easy, yeah. Elegant, oh no.

For a full-screen extravaganza, okay, maybe you can get away with this. For
a little puzzle game or somesuch, doesn’t seem good.

On Tue, May 28, 2002 at 08:53:59AM -0700, Blake Senftner wrote:


Matthew Miller @Matthew_Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

I don’t know if I would call this clever, but I never use sleep()… just have your application loop continuously, peeking at the time every loop and keep a time-ordered queue of everything that needs to occur at a specific time. When the top of the queue has something that is <= the current time,

Right, that’s not clever, that’s burning up the CPU and annoying users who might look at top. :slight_smile:

This will eat 100% of your CPU, but it is easy, elegant. When executing real time multimedia applications, I give up on the idea of sharing the CPU.

Easy, yeah. Elegant, oh no.

For a full-screen extravaganza, okay, maybe you can get away with this. For a little puzzle game or somesuch, doesn’t seem good.


Matthew Miller mattdm at mattdm.org http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

So modify this technique a bit… at startup, figure out the maximum time it takes for you to get control back from a sleep(0). Then make the busy loop keep control as long as the top of the queue is <= (currTime + maxSleepZeroReturnTime + 1). When you have more time than (maxSleepZeroReturnTime + 1), do a sleep(0). You’ll have control for the duration that you need it and still give other processes a chance to execute.

Also, you’d probably want to keep an eye on maxSleepZeroReturnTime to make
sure that as other processes spawn and die it remains accurate. (Meaning
that it should probably be a dynamic value.)
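A rough sketch of what that calibration might look like (names hypothetical, and with SDL_Delay(0) standing in for sleep(0)):

```c
#include "SDL.h"

static Uint32 max_yield_ms = 0;    /* Blake's maxSleepZeroReturnTime */

/* At startup (and occasionally thereafter): measure the worst-case time it
 * takes to get control back from a zero-length delay. */
static void calibrate_yield(void)
{
    int i;
    Uint32 before, cost;
    for (i = 0; i < 50; i++) {
        before = SDL_GetTicks();
        SDL_Delay(0);              /* stand-in for sleep(0) / yield  */
        cost = SDL_GetTicks() - before;
        if (cost > max_yield_ms)
            max_yield_ms = cost;
    }
}

/* In the main loop: only give the CPU away when the next queued task is far
 * enough in the future that we can expect to get control back in time. */
static void maybe_yield(Uint32 next_task_time)
{
    Uint32 now = SDL_GetTicks();
    if (next_task_time > now + max_yield_ms + 1)
        SDL_Delay(0);              /* let other processes run        */
    /* otherwise keep busy-waiting; we need control back too soon    */
}
```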

BTW, if Prof. Glen Bresnham is still at the B.U. Graphics Lab, try asking
him what he would do. The guy is brilliant and I’m sure that he’s faced the
same issue multiple times.

-Blake
(B.U. alum '88, Graphics Lab staff '85 - '88.)

From: mattdm@mattdm.org (Matthew Miller)
Sent: Tuesday, May 28, 2002 8:54 AM
Subject: Re: minimum timeslice fun (was Re: [SDL] semaphores and mutexes)

On Tue, May 28, 2002 at 08:53:59AM -0700, Blake Senftner wrote:

Also, you’d probably want to keep an eye on maxSleepZeroReturnTime to make
sure that as other processes spawn and die it remains accurate. (Meaning
that it should probably be a dynamic value.)

Spawning and dying of other processes isn’t the only problem. Sleep(0) gives the remaining time of the process’s slice to other processes, right? Well, if no processes need work, it returns immediately. So the time Sleep(0) takes depends on the scheduling cycle!

Anyhow, calling Sleep is not a good solution, I guess, because with Sleep you give control to the OS. Even on preemptive systems it is not guaranteed to come back in time, because other processes may have higher priorities.

As mentioned before, if you have a puzzle-like game, use an event-driven system, which is friendly to the CPU. If you write a game which should pump 60 fps, you shouldn’t worry if your printed document comes out in 10 or 20 seconds :slight_smile: You should then use interpolated animation…
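For what it’s worth, “interpolated animation” here usually means something like the sketch below (purely illustrative, not Patrick’s code): run the game logic at a fixed step and interpolate what you draw between the last two logic states, so the rendering rate is decoupled from the logic rate:

```c
#include "SDL.h"

#define LOGIC_MS 20                /* fixed 50 Hz logic step               */

static float prev_x, curr_x;       /* one animated value, for illustration */

static void logic_step(void)       /* advance the simulation by LOGIC_MS   */
{
    prev_x = curr_x;
    curr_x += 2.0f;                /* e.g. move 2 pixels per logic step    */
}

static void run(SDL_Surface *screen)
{
    Uint32 accumulated = 0, last = SDL_GetTicks();
    for (;;) {
        Uint32 now = SDL_GetTicks();
        float frac, x;

        accumulated += now - last;
        last = now;
        while (accumulated >= LOGIC_MS) {   /* catch up in fixed steps     */
            logic_step();
            accumulated -= LOGIC_MS;
        }
        /* Render using the fraction of a step we are into the next one.   */
        frac = (float)accumulated / LOGIC_MS;
        x = prev_x + (curr_x - prev_x) * frac;
        /* ...draw the sprite at x, then SDL_Flip(screen)...                */
        (void)x; (void)screen;     /* placeholders; real code would draw here */
    }
}
```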

Patrick.

----- Original Message -----

From: bsenftner@earthlink.net (Blake Senftner)
To:
Sent: Tuesday, May 28, 2002 10:53 AM
Subject: Re: minimum timeslice fun (was Re: [SDL] semaphores and mutexes)

[full quote of the earlier messages snipped]

If your hardware can handle 200fps, and you’re using up the whole CPU to do 60, that’s pretty annoying, yeah? :)

On Tue, May 28, 2002 at 09:00:26PM -0700, Patrick Kooman wrote:

system, which is friendly to the CPU. If you write a game which should pump 60 fps, you shouldn’t worry if your printed document comes out in 10 or 20 seconds :slight_smile: You should then use interpolated animation…


Matthew Miller @Matthew_Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

Also, you’d probably want to keep an eye on maxSleepZeroReturnTime to make sure that as other processes spawn and die it remains accurate. (Meaning that it should probably be a dynamic value.)

Spawning and dying of other processes isn’t the only problem. Sleep(0) gives the remaining time of the process’s slice to other processes, right? Well, if no processes need work, it returns immediately. So the time Sleep(0) takes depends on the scheduling cycle!

Anyhow, calling Sleep is not a good solution, I guess, because with Sleep you give control to the OS. Even on preemptive systems it is not guaranteed to come back in time, because other processes may have higher priorities.

If the OS has real-time threads, you can get a better guarantee. Still not a complete guarantee, but I’ve seen it smooth out a choppy game. You assign the thread a period (how often it needs to do work) and a duration (how long it needs to get work done in each cycle), and the scheduler tries to accommodate the thread. You might be able to give a large duration and yield when you are finished working, and the scheduler will try to honor the period you set. I know Mac OS X can do this, but I don’t know about other systems. Sounds like a nice feature for SDL 1.3 if we can support it on several OSes.
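On Mac OS X this is the Mach time-constraint thread policy. A hedged sketch of how one might request it follows (the helper name and the nanosecond parameters are invented; this is not an SDL API, and the details should be checked against Apple’s documentation):

```c
#include <stdint.h>
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>

/* Ask Mach to treat the calling thread as a real-time thread with the given
 * period and per-period computation time.  Times are passed to the kernel in
 * Mach absolute-time units, so convert from nanoseconds first. */
static int make_thread_realtime(double period_ns, double computation_ns)
{
    mach_timebase_info_data_t tb;
    thread_time_constraint_policy_data_t policy;
    kern_return_t kr;

    mach_timebase_info(&tb);       /* abs units = ns * denom / numer        */
    policy.period      = (uint32_t)(period_ns      * tb.denom / tb.numer);
    policy.computation = (uint32_t)(computation_ns * tb.denom / tb.numer);
    policy.constraint  = policy.period;   /* must finish within one period  */
    policy.preemptible = 1;

    kr = thread_policy_set(mach_thread_self(),
                           THREAD_TIME_CONSTRAINT_POLICY,
                           (thread_policy_t)&policy,
                           THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    return (kr == KERN_SUCCESS) ? 0 : -1;
}
```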

-D

On Tuesday, May 28, 2002, at 11:00 PM, Patrick Kooman wrote:

I don’t know if I would call this clever, but I never use sleep()… just have your application loop continuously, peeking at the time every loop and keep a time-ordered queue of everything that needs to occur at a specific time. When the top of the queue has something that is <= the current time,

Right, that’s not clever, that’s burning up the CPU and annoying users
who might look at top. :slight_smile:

Those users should help me implementing proper retrace sync’ed
pageflipping on Linux. :wink:

(Lack of retrace sync is the primary reason for this CPU abuse in the
first place. SDL_FlipSurface() should block if there’s no unused page
available, just as it does on some platforms.)

This will eat 100% of your CPU, but it is easy, elegant. When
executing real time multimedia applications, I give up on the idea of
sharing the CPU.

Easy, yeah. Elegant, oh no.

Right. There is no elegant solution that works on all platforms.

For a full-screen extravaganza, okay, maybe you can get away with this.
For a little puzzle game or somesuch, doesn’t seem good.

…unless you want the smoothest possible animation in the puzzle
game, that is. :slight_smile:

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
|      Multimedia Application Integration Architecture      |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Tuesday 28 May 2002 17:54, Matthew Miller wrote:

On Tue, May 28, 2002 at 08:53:59AM -0700, Blake Senftner wrote:

Problem is that most systems will consider your thread a CPU hog if it
never blocks, and as a result, when the system decides to preempt your
thread, it may not give the CPU back in “ages”. When you refuse to
explicitly let go of the CPU, a queue of other runnable threads builds
up, and if there are enough of them, that could be enough to cause
serious timing problems.

That said, in real life, this seems to be a lot less of a problem than
one might expect. Never releasing the CPU voluntarily seems to work
pretty well.

(And of course, a proper video driver will “fix” the problem by blocking
on the retrace or “page available” event whenever you call
SDL_FlipSurface(). That is, depending on what platform you’re using, you
may never actually see this problem.)

//David Olofson — Programmer, Reologica Instruments AB

On Wednesday 29 May 2002 06:00, Patrick Kooman wrote:

Also, you’d probably want to keep an eye on maxSleepZeroReturnTime to make sure that as other processes spawn and die it remains accurate. (Meaning that it should probably be a dynamic value.)

Spawning and dying of other processes isn’t the only problem. Sleep(0) gives the remaining time of the process’s slice to other processes, right? Well, if no processes need work, it returns immediately. So the time Sleep(0) takes depends on the scheduling cycle!

Anyhow, calling Sleep is not a good solution, I guess, because with Sleep you give control to the OS. Even on preemptive systems it is not guaranteed to come back in time, because other processes may have higher priorities.

Yeah. Please, propose a way to avoid it, without just screwing up
animation. :slight_smile:

//David Olofson — Programmer, Reologica Instruments AB

On Tuesday 28 May 2002 21:11, Matthew Miller wrote:

On Tue, May 28, 2002 at 09:00:26PM -0700, Patrick Kooman wrote:

system, which is friendly to the CPU. If you write a game which should pump 60 fps, you shouldn’t worry if your printed document comes out in 10 or 20 seconds :slight_smile: You should then use interpolated animation…

If your hardware can handle 200fps, and you’re using up the whole CPU
to do 60, that’s pretty annoying, yeah? :slight_smile:

Is this theoretically possible in windowed mode in X? My guess is “no”…

On Thu, May 30, 2002 at 08:10:21PM +0200, David Olofson wrote:

(And of course, a proper video driver will “fix” the problem by blocking
on the retrace or “page available” event whenever you call
SDL_FlipSurface(). That is, depending on what platform you’re using, you
may never actually see this problem.)


Matthew Miller @Matthew_Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

It’s not possible at all in X, unless you’re using one of the few
drivers that implement it.

As to retrace sync in windowed mode, there are basically two kinds of
implementation;

Double buffered “desktop”

On Thursday 30 May 2002 20:13, Matthew Miller wrote:

On Thu, May 30, 2002 at 08:10:21PM +0200, David Olofson wrote:

(And of course, a proper video driver will “fix” the problem by
blocking on the retrace or “page available” event whenever you call
SDL_FlipSurface(). That is, depending on what platform you’re using,
you may never actually see this problem.)

Is this theoretically possible in windowed mode in X? My guess is
"no"…


Incredibly messy, unless there’s full hardware windowing support. (If the windows are composed into a full display by the CRTC, there’s not really a problem, as every window has its own VRAM pointer, which you can use for pageflipping just as in fullscreen mode.) Without h/w support (i.e. on pretty much any video card out there), the server has to deal with multiple applications trying to do graphics in different ways, at different frame rates, at the same time, on the same screen.

“Explicit retrace sync”

Requires some way of performing blits in very tight sync with the raster.
(Much tighter than with pageflipping, as you need to make sure that
you’re not blitting where the raster beam is.) Very easy to do if the
video accelerator has a generic raster sync command - but it seems like
that’s not the case with most cards.

AFAIK, some cards have a command for “pageflip with retrace sync”. On those, one could probably implement a fake “explicit retrace sync” by performing a dummy flip to the same page. Other than that, it’s either busywaiting on some port, or implementing something that tracks the video timing by occasionally taking peeks at the retrace status. (The latter requires real-time scheduling accurate to within one video frame or better to be of much use. You need that to actually do something at the right time - knowing exactly what “the right time” is isn’t enough.)

//David Olofson — Programmer, Reologica Instruments AB


I think the main concern here seems to be not burning cycles rendering extra unused (or unusable) frames… For software surfaces I think the answer seems to be obvious: Delay by the remainder of the inverse of the frame rate each time SDL_Flip() is called…

i.e. query the video system to find that the surface is displayed on a screen running at (let’s say) 80 Hz… 80 Hz == 12.5 ms per frame. So every time SDL_Flip() is called on a software surface, SDL could save the tick-count. If 12 ticks haven’t elapsed since the previous SDL_Flip, then sleep until they had…

Comments?
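A minimal sketch of the idea (names hypothetical; as described above, this would really belong inside SDL’s software flip path rather than in application code):

```c
#include "SDL.h"

#define FRAME_MS 12                /* 80 Hz ~= 12.5 ms, rounded to whole ticks */

static Uint32 last_flip = 0;

/* Flip, but first sleep off whatever is left of the current frame period. */
static void capped_flip(SDL_Surface *screen)
{
    Uint32 now = SDL_GetTicks();
    if (last_flip != 0 && now - last_flip < FRAME_MS)
        SDL_Delay(FRAME_MS - (now - last_flip));
    SDL_Flip(screen);
    last_flip = SDL_GetTicks();
}
```

As the replies below point out, the weak links are SDL_Delay()’s coarse granularity and the fact that nothing here is actually locked to the real retrace.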

-Loren

— david.olofson at reologica.se wrote:

[David’s retrace-sync explanation quoted in full; snipped, see above]

Yeah, but how do you propose to sleep for, say, 4ms?

On Fri, May 31, 2002 at 02:51:10AM -0700, Loren Osborn wrote:

i.e. query the video system to find that the surface is displayed on a screen running at (let’s say) 80 Hz… 80 Hz == 12.5 ms per frame. So every time SDL_Flip() is called on a software surface, SDL could save the tick-count. If 12 ticks haven’t elapsed since the previous SDL_Flip, then sleep until they had…
Comments?


Matthew Miller @Matthew_Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

I think the main concern here seems to be not burning cycles rendering extra unused (or unusable) frames… For software surfaces I think the answer seems to be obvious: Delay by the remainder of the inverse of the frame rate each time SDL_Flip() is called…

Well, that’s exactly what a proper retrace sync’ed implementation of the driver’s “flip” operation does.

i.e. query the video system to find that the surface is displayed on a screen running at (let’s say) 80 Hz… 80 Hz == 12.5 ms per frame. So every time SDL_Flip() is called on a software surface, SDL could save the tick-count. If 12 ticks haven’t elapsed since the previous SDL_Flip, then sleep until they had…

Comments?

The first problem with that is that you can’t do it without proper real-time scheduling and high-resolution timers. It could be done on most platforms, though. (Multimedia timers on Win32, the RTC driver on Linux, etc.)

The second problem is the big one: since there’s no synchronization between the actual refresh rate and your loop, you’ll have terrible tearing that slowly drifts over the screen. I’ve seen that during some experiments, and I’d say it looks a lot worse than normal, “random” tearing.

Turn it into a PLL that locks on the refresh rate by occasionally looking for the retrace, and we’re in business. This is what I’m trying to do on Linux, for use with various drivers. (Drivers will need to occasionally timestamp retraces and pass the data to a daemon, which will then keep track of video timing using a real time thread driven by the RTC, or other suitable IRQ source.)
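One possible shape for such a PLL, as a purely illustrative sketch (this is not the actual daemon design): free-run on an estimated refresh period, and nudge both the predicted phase and the period whenever a real retrace timestamp comes in:

```c
/* Estimated refresh period and predicted time of the next retrace, in ms. */
static double period_ms    = 1000.0 / 80.0;   /* initial guess: 80 Hz      */
static double next_retrace = 0.0;

/* Call this whenever a driver/daemon reports an actual retrace timestamp. */
static void pll_observe(double retrace_time_ms)
{
    double error = retrace_time_ms - next_retrace;   /* phase error        */
    next_retrace += 0.5  * error;    /* correct the phase fairly hard      */
    period_ms    += 0.05 * error;    /* correct the frequency gently       */
}

/* Predict when the next retrace will happen, free-running between samples. */
static double pll_predict(double now_ms)
{
    while (next_retrace <= now_ms)
        next_retrace += period_ms;
    return next_retrace;
}
```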

//David

.---------------------------------------
| David Olofson
| Programmer

david.olofson at reologica.se
Address:
REOLOGICA Instruments AB
Scheelevägen 30
223 63 LUND
Sweden
---------------------------------------
Phone: 046-12 77 60
Fax: 046-12 50 57
Mobil:
E-mail: david.olofson at reologica.se
WWW: http://www.reologica.se

`-----> We Make Rheology Real

On Fri, 31/05/2002 02:51:10 , Loren Osborn <linux_dr at yahoo.com> wrote:

i.e. query the video system to find that the surface is displayed on a screen running at (let’s say) 80 Hz… 80 Hz == 12.5 ms per frame. So every time SDL_Flip() is called on a software surface, SDL could save the tick-count. If 12 ticks haven’t elapsed since the previous SDL_Flip, then sleep until they had…

The problem is in sleeping precise amounts of time…especially on Linux,
where a nap results in at least a 10ms block.

Also, I’m not sure if there’s a good way to determine screen refresh rate
at all on most systems, let alone portably.

–ryan.

[…]

The problem is in sleeping precise amounts of time…especially on Linux,
where a nap results in at least a 10ms block.

Actually, most platforms have the same problem - and AFAIK, most platforms also provide one or more solutions.

The only real issue with Linux is that the RTC has a 64 Hz limit for normal users, which makes the timer next to useless for normal applications. Meanwhile, it’s perfectly possible to tell a sound card to generate thousands of IRQs per second, so I can’t really see a defense for this 64 Hz limit. I’d say 1024 Hz would be a sensible limit.
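For reference, a hedged sketch of what using the Linux RTC driver’s periodic interrupt looks like (rates above 64 Hz are refused for non-root processes on the kernels of the time, which is exactly the limit being complained about):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
    unsigned long data;
    unsigned long rate = 1024;
    int fd = open("/dev/rtc", O_RDONLY);
    if (fd < 0) {
        perror("/dev/rtc");
        return 1;
    }

    ioctl(fd, RTC_IRQP_SET, rate);   /* ask for 1024 interrupts per second */
    ioctl(fd, RTC_PIE_ON, 0);        /* enable periodic interrupts         */

    for (;;) {
        /* Blocks until the next RTC tick (~1 ms at 1024 Hz). */
        if (read(fd, &data, sizeof data) < 0)
            break;
        /* ...do one slice of timing-critical work here...                 */
    }

    ioctl(fd, RTC_PIE_OFF, 0);
    close(fd);
    return 0;
}
```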

Also, I’m not sure if there’s a good way to determine screen refresh rate
at all on most systems, let alone portably.

Well, if there’s some retrace syncing, blocking call, you can always time it, although that’s not exactly the most reliable method…

DirectX at least has a proper API for it, but it doesn’t seem to be too reliable either. (Always worked in my video cards, but people have reported weirdness.)

Most Linux targets seem to have some way of querying modes (including refresh rates), but I haven’t dug into the details so far. (Kind of pointless when you can’t sync anyway… heh)

//David

On Fri, 31/05/2002 11:27:16 , Ryan C. Gordon wrote:

On Linux on the Alpha processor, the minimum timeslice is 1 ms (actually slightly less). There was some talk on the kernel list of making this the case on x86 too, now that processors are faster, but that doesn’t seem to have gone anywhere. (There would be a little more overhead, of course…)

On Fri, May 31, 2002 at 07:13:12PM +0200, David Olofson wrote:

Actually, most platforms have the same problem - and AFAIK, most platforms
also provide one or more solutions.


Matthew Miller @Matthew_Miller http://www.mattdm.org/
Boston University Linux ------> http://linux.bu.edu/

100 Hz is rather sufficient for most things, it seems. Keep in mind that this is not the absolute granularity of scheduling - HZ affects only preemptive/timesharing rescheduling, and - unfortunately - the software timers.

IMHO, it would be more efficient and cleaner to change the RTC limit from 64 Hz to 1024 Hz for normal users, and perhaps change the driver interface to support multiple open and sharing for the blocking RTC timer feature. (The latter could be done with a library in user space, but that would result in any applications using the driver directly either failing or stealing the timer.)

I’m seriously considering hacking up a proposal for this, since it would be very useful for serious MIDI applications and the like, as well as for this retrace sync daemon.

Windows has multimedia timers that any application can use, but Linux has nothing that non-root applications can access. (Unless you feel like abusing an audio card or something, that is.)

//David

On Fri, 31/05/2002 13:19:09 , Matthew Miller wrote:

On Fri, May 31, 2002 at 07:13:12PM +0200, David Olofson wrote:

Actually, most platforms have the same problem - and AFAIK, most platforms
also provide one or more solutions.

On Linux on the Alpha processor, the minimum timeslice is 1 ms (actually slightly less). There was some talk on the kernel list of making this the case on x86 too, now that processors are faster, but that doesn’t seem to have gone anywhere. (There would be a little more overhead, of course…)