I think for stable real-time operation on Linux
(e.g. you are doing a backup in the background and want to use the remaining CPU
to play your favorite foobar game at full speed, which is supposed to use less
than 100% of the CPU),
we really need to get the X server out of the 2D rendering cycle.
It would even be cool to be able to mmap() the framebuffer into memory,
even while you are in windowed mode (not fullscreen), and then do your
own rendering.
Ok, I know this can leave some garbage on the screen sometimes when you move
windows, just like kwinTV, which maps the TV card’s picture into the
framebuffer memory. (Sometimes when I overlap the TV window with another window
and then move the other window, a part of the TV picture remains on the latter.)
my wishes (on Linux):
a set of drivers which allows full direct access to the 2D hardware, without any
intervention by the X server (since the X server runs with SCHED_OTHER as a
regular process, and performance will suck when you stress your system with
other background load).
Maybe the ideal solution would be a set of functions providing access (mmap, DMA)
to the gfx card’s framebuffer memory, implemented not as a client/server system
but as a library. That means a SCHED_FIFO (realtime) process
(assuming the low-latency extensions get into the official kernel) would
receive full graphics performance, no matter whether you have heavy system load
in the background or not.
Of course these accelerations would be really nice in windowed mode too
(with the method described above, manual clipping), so that the user experiences
full performance regardless of whether he is working in
fullscreen mode or not.
making the X server “realtime-capable”:
separate the “deterministic” functions from the non-deterministic ones.
That means the X server should run with two threads:
one doing the “slow things” (running with normal priority), like memory
allocation/deallocation, font managing etc.,
the other doing the rendering, which should run with SCHED_RR at the lowest
realtime priority. In this case the X server can preempt other “disturbing”
tasks and let you achieve smooth performance when doing X11 rendering,
independently of the background load (disk I/O, CPU etc.) you put on your machine.
But that is not enough: to achieve the above, the X11 dispatching loop
(the one select()'ing on the clients’ file descriptors) has to
be modified to support a prioritized event system.
e.g. you have an oscilloscope simulation which should be as smooth as possible,
even if you run x11perf at the same time:
- run your app with SCHED_FIFO / SCHED_RR
- tell the X server you need higher priority for your X11 events
- the X server rendering thread, which is supposed to run with SCHED_RR, now
  knows that your app’s X11 messages have to be processed with higher priority
  than the ones of x11perf.
The X server could simply use some prioritization mechanism (round-robin,
FIFO, etc.) to defer x11perf’s X11 messages in order to process the ones of your
app first.
With the above two methods, one could design games using the first method,
and design “GUI” apps (the ones using standard X11 calls) using the second,
both with the best possible graphics performance, independently of background
activity.
Can the SGI do similar things?
(I know the SGI realtime scheduling is quite good; I heard that audio with 5-7 ms
latency is possible even on older machines, but I don’t know if the
realtime capabilities apply to video too.)
Benno.

On Tue, 28 Mar 2000, Dan Maas wrote:
Yeah, I know… And isn’t there a “below the 16 megs limit” thing for
DMA also? Or is this only 8 bit DMA or something like that?
Only for ISA DMA. PCI cards have access to all physical RAM, although you
still need to worry about contiguous buffers. AGP chipsets do their own
scatter/gather so you don’t even need that. The GART kernel module nicely
exports this functionality.
I think the DRI kernel module has some way to let processes mmap it to
get DMA buffers, with authentication through the X server using
ioctl()s. It is being touted very loudly as a 3D/OpenGL solution, but
I’m pretty sure we could use that stuff to dramatically improve 2D
performance. We’d just need to look at how to do it. This would
require a client-side video driver library, etc…
Yes, it would be neat to take advantage of DRI. That looks like the stable
long-term solution. (Actually, as a short-term hack, you could build on
Utah-GLX… It already has AGP and direct rendering; just export some 2D
functionality as well.)
Personally I have the ambition to write a whole windowing system someday
from the ground up: forget X entirely and use direct hardware OpenGL for
all rendering.
Thanks for your comments,