Hello,
I am interested in developing a real-time graphics
application that will update the contents
of the framebuffer every 10 msec.
It is absolutely mandatory that this refresh rate be
kept steady, with no frames dropped.
I was thinking of using SDL under RTLinux.
Has anyone tried this?
Not that I know of. It would require SDL to be ported to RTL (in kernel
space), along with any resources it needs, such as the video driver.
You could try some of the user space real time solutions - some versions
of the Linux/lowlatency patch can give you practically hard real time
(i.e. no missed deadlines) with a worst case latency in the 0.5 ms range.
http://www.linuxdj.com/audio/lad/resourceslatency.php3
Note that the worst case latency peaks are very rare, and occur only when
you use certain kernel services, such as file systems or the fb console.
I’ve done control prototyping in user space with a 2 kHz regulator cycle
without missing deadlines, so you can actually get better than 0.5 ms
worst case latency on a properly tuned and calm system.
What you need to do to try this out - expect patching and building a
lowlatency kernel - is to switch your rendering thread to the SCHED_FIFO
policy, along with the X server, if you use X.
Running X as a real time thread is a bit risky, though - you should
definitely avoid that on systems running normal desktop environments. The
X server isn’t designed to run as a non-preemptible real time thread, and
can end up freezing the system! Using fbdev or even svgalib might be a
better idea. fbdev + DirectFB perhaps…?
Anyway, no matter what measures you take to ensure that your rendering
code starts once every 10 ms - there’s another major issue with a frame
rate like that: rendering speed. If you need to touch the whole screen,
it’s just not possible to achieve 100 fps in higher resolutions than
about 320x240 in 24 or 32 bits, or possibly 640x480 in 8 bits - unless
you manage to get a Linux driver to do busmaster DMA transfers from
system RAM. CPU writes to VRAM are simply too slow, regardless of hardware.
If you can render your images with OpenGL, without too many procedural
textures, that might make things easier - but of course, you still need
to ensure that the whole chain of code involved in the rendering, all the
way down to the hardware, runs in real time context.
Is there any other solution to guaranteeing a fixed
framebuffer refresh rate in SDL?
No. SDL doesn’t support deeper buffering than double buffering, so you
can’t really get SDL or your application out of the timing critical loop
- and there are no real time video drivers for Linux anyway, so you need
to do some hacking regardless. (See below.)
Any pointers would be greatly appreciated.
Well, there might be another way. If all you need is a fixed output frame
rate, you could hack the video driver to implement deeper buffering
than two pages. That is, every time SDL “pumps” a new frame to the
driver, the driver actually just puts it in a queue, and then flips to
the next free page in a circular chain of pages.
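The bookkeeping for such a circular chain of pages is simple; here is only a sketch of the queue logic (NPAGES and all names are made up, and this is not an actual driver patch):

```c
#include <stddef.h>

#define NPAGES 8	/* hypothetical length of the page chain */

/* Circular chain of video pages: the application fills pages at 'head',
 * the real time flip code consumes them at 'tail'. */
typedef struct {
	int head, tail;	/* indices into the page chain */
	int count;	/* pages queued, ready for display */
} page_queue;

/* Application side: claim the next free page, or fail if the chain is
 * full.  One page is always kept back, as it's being displayed. */
int queue_push(page_queue *q)
{
	if (q->count == NPAGES - 1)
		return -1;
	q->head = (q->head + 1) % NPAGES;
	q->count++;
	return q->head;
}

/* Flip side (would run from the retrace ISR): take the oldest queued
 * page, or report an underrun and keep showing the current page. */
int queue_pop(page_queue *q)
{
	if (q->count == 0)
		return -1;
	q->tail = (q->tail + 1) % NPAGES;
	q->count--;
	return q->tail;
}
```

The key property is that push and pop run in different contexts: the application pushes whenever a frame is done, while the retrace code pops exactly once per display frame.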
The actual pageflipping (setting the video card up to display one page
every frame) could be done by an RTL or RTAI hack, which is how you’d
guarantee accurate timing. (In the driver, you could replace the
pageflipping code with some code that sends “commands” to an RTL/RTAI
retrace ISR, or if there’s no retrace IRQ, a periodic thread that
synchronises by starting to poll the retrace bit right before the
expected retrace.)
The point is that the buffering (the circular chain of pages) in between
the application and the actual, retrace sync’ed pageflipping will cut you
some slack. It’s ok for the application to stall occasionally, as long as
the driver never runs out of pages ready for display. (It’s probably
still a very good idea to use the lowlatency patch and SCHED_FIFO for
your application though, as standard Linux can have “hiccups” in the
range of hundreds of ms, which may be too much.)
Of course, this adds some delay from application input to visual output.
If this is acceptable, maybe this route could be viable. If not, it had
better work with a lowlatency kernel and SCHED_FIFO alone, or you’re in
for some serious hacking!
Note: It’s possible to do rock solid audio processing with less than 4 ms
of input->output latency with a decent Linux/Lowlatency kernel - even on
lowly Pentium systems. However, this is much easier than video, as sound
card drivers are basically all in kernel space (well, ALSA has a wrapper
library, but that doesn’t make a difference), with no user space servers
or other stuff. All you need to do is to run your audio thread as
SCHED_FIFO - sound card drivers are generally just as real time capable
as the kernel they run in.
//David Olofson — Programmer, Reologica Instruments AB
.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

On Sunday 10 February 2002 20:46, Nicolas Cottaris wrote: