Increasing an SDL application's priority

The point is to allow the game to keep running at a high and
steady frame rate, forcing the procedural texture animation to
slow down when the polygon pumping occasionally gets too heavy.
(Similar to what a caching engine would do if the disk is too
slow to cope with the sector changes; tell the rasterizer to use
smaller mip-map versions until the right ones have been loaded.)

I don’t think this is a good argument for thread priorities. If
the frame rate drops, then set a flag somewhere telling the
procedural texture thread to sleep instead of processing.

If the frame rate drops, you’re too late to fix anything, as the
damage is already done…

Maybe
that thread needs to check the flag after every row generated or
something (if it’s a really CPU intensive calculation). Again,
that should give much more reliable results than hinting the
scheduler.
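
A minimal sketch of that per-row flag check, assuming SDL's timing calls; the texture size, the row generator and the frame budget are placeholders, not anything from the original posts:

```c
#include "SDL.h"

#define TEXTURE_HEIGHT 256                    /* placeholder size */

/* Hypothetical: generates one scanline of the procedural texture. */
extern void generate_one_row(int row, void *userdata);

/* Shared state: set by the rendering loop when the last frame blew its
 * budget, cleared when it catches up again. */
static volatile int texture_thread_should_yield = 0;

/* Body of the procedural texture thread (thread creation omitted). It
 * checks the flag after every generated row and backs off instead of
 * competing with the renderer for the CPU. */
int generate_texture(void *userdata)
{
    int row;
    for (row = 0; row < TEXTURE_HEIGHT; ++row) {
        generate_one_row(row, userdata);
        while (texture_thread_should_yield)
            SDL_Delay(5);      /* sleep until the renderer catches up */
    }
    return 0;
}

/* Called from the rendering loop after each frame. */
void update_yield_flag(Uint32 frame_start_ms, Uint32 frame_budget_ms)
{
    texture_thread_should_yield =
        (SDL_GetTicks() - frame_start_ms > frame_budget_ms);
}
```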

Yes, but only if you do all processing cooperatively in a single
thread. Who’s going to update the flag while the procedural texture
generator is hogging the only CPU? (Obviously, this is one case where
SMP systems are significantly different.)

Note that there will be no timesharing switches here, as there are at
best around 1.7 “jiffies” per frame on the average OS, so the threads
essentially run until they sleep, or a higher priority thread wakes
up.

If it had the same priority as the rendering and audio
threads, the timesharing scheduler would eventually give it
higher priority, resulting in massive audio drop-outs and
lots of dropped frames.

There’s an easy workaround: have the background threads sleep
most of the time when they get ahead of the main thread.
That’s basically an end-run around the scheduler, but the
results should be much more direct and predictable than doing
it indirectly by mucking around with thread priorities.

It doesn’t work, unless the thread that’s supposed to have higher
priority can live with being starved for “extended” amounts of
time when the “low priority” thread eventually needs to do some
work.

For example, in real time audio applications, audio latency
should preferably be less than 3 ms (no problems with
Linux/lowlatency), whereas graphics usually deals with frame
periods in excess of 10 ms. Without priorities, doing heavy video
rendering (i.e. using more than a fraction of the CPU power for
video) while processing audio would require partial rendering or
similar hacks.
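
For a sense of scale, 3 ms at 44.1 kHz is roughly 132 sample frames, so the audio callback has to work with buffers on the order of 128 samples. A minimal sketch of requesting such a buffer through SDL's audio API (the callback here just writes silence; real code would render audio there):

```c
#include "SDL.h"
#include <string.h>

/* Placeholder callback; a real application would mix or synthesize here. */
static void fill_audio(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;
    memset(stream, 0, len);                 /* silence */
}

int open_low_latency_audio(void)
{
    SDL_AudioSpec desired, obtained;
    memset(&desired, 0, sizeof(desired));
    desired.freq     = 44100;
    desired.format   = AUDIO_S16SYS;
    desired.channels = 2;
    desired.samples  = 128;                 /* 128 / 44100 Hz ~= 2.9 ms per buffer */
    desired.callback = fill_audio;

    if (SDL_OpenAudio(&desired, &obtained) < 0)
        return -1;
    /* The driver may hand back a larger buffer; the achievable latency
     * ultimately depends on the OS and driver, not on this request alone. */
    SDL_PauseAudio(0);
    return 0;
}
```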

Audio is a lot different than graphics, no argument here. With
audio you really do have a hard real-time constraint, and the
thread that feeds the audio FIFO may indeed need higher priority.
That can be buried in SDL somewhere though, if necessary.

For
graphics I really don’t see the need for application-level thread
priorities.

It’s virtually impossible to implement perfect full frame rate
animation with more than one thread without priorities, as you cannot
take a single dropped frame without clearly visible jerking.

Of course, one could just argue that anything that’s sufficiently
"simple" to run at full frame rate cannot be multithreaded, or that a
constant full frame rate isn’t possible on computer hardware (a lie) or
whatnot, but the problem doesn’t go away just because it’s ignored.
The question is just whether we should stick to single
threaded/cooperative threading engines, require MP machines, or
implement coarse priority control. :slight_smile:

//David

On Tuesday 13 March 2001 03:46, Thatcher Ulrich wrote:

It’s virtually impossible to implement perfect full frame rate
animation with more than one thread without priorities, as you cannot
take a single dropped frame without clearly visible jerking.

I don’t think humans can detect more than 30-40 frames per second, so going
beyond that is almost silly. Is it not the case that “smoothness” is
achieved with a constant frame rate? I mean regular TV is 24 (? 30??)
frames per second and it looks pretty smooth.

Full frame rate also probably looks very smooth because it is constant? If
this is the case, this whole conversation is moot: instead of trying to get
60 (or even something crazy like 75 or 90) frames per second, go for 30 and
use the rest of the CPU time for other stuff; chances are most people won’t
notice and your game/demo/app will get to do more per frame!

Olivier A. Dagenais - Software Architect and Developer

I don’t think humans can detect more than 30-40 frames per second, so going
beyond that is almost silly. Is it not the case that “smoothness” is
achieved with a constant frame rate? I mean regular TV is 24 (? 30??)
frames per second and it looks pretty smooth.

The frame rate required for things to look smooth depends on a lot of
factors (view size, brightness, object/background movement speed etc.).
There are also considerable individual perceptual variations.

TVs typically show 50 (PAL/SECAM) or 60 (NTSC) half-frames per second,
interlaced. Standard cinema film runs at 48 frames/s (24 distinct frames/s,
but each frame is shown twice with a shutter blink in between). I think
Omnimax projectors do at least 60 frames/s, which is necessary because
of the much wider and brighter screen.

The same applies to computer games: the required frame rate depends
strongly on the game. The only way to find out how much is "enough" is
to try it with your particular game.

Maybe
that thread needs to check the flag after every row generated or
something (if it’s a really CPU intensive calculation). Again,
that should give much more reliable results than hinting the
scheduler.

Yes, but only if you do all processing cooperatively in a single
thread. Who’s going to update the flag while the procedural texture
generator is hogging the only CPU? (Obviously, this is one case where
SMP systems are significantly different.)

It gets updated at the next slice. Or, it’s a watchdog: the rendering
thread increments a threshold timer when it’s done with a frame. The
background thread could compare the current time with the threshold
timer, and sleep when the current time exceeds the threshold time.
True, this is a bunch of work and it’s akin to single threading,
except that the threads have their own stacks and can run on different
CPUs on an SMP machine. But I don’t think thread priorities solve this
problem any better.
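
A minimal sketch of that watchdog, assuming SDL's millisecond tick counter; the 10 ms headroom figure and the work function are placeholders:

```c
#include "SDL.h"

/* Hypothetical: one small, bounded piece of the background job. */
extern void do_small_unit_of_work(void);

/* Deadline that the renderer keeps pushing forward; the background
 * thread may only run while the current time is below it. */
static volatile Uint32 work_allowed_until = 0;

/* Rendering thread: after finishing a frame, grant the background
 * thread some headroom. */
void frame_finished(void)
{
    work_allowed_until = SDL_GetTicks() + 10;
}

/* Background thread body: work in small units, back off past the deadline. */
int background_worker(void *unused)
{
    (void)unused;
    for (;;) {
        do_small_unit_of_work();
        /* Wrap-safe comparison: sleep while we are past the deadline. */
        while ((Sint32)(SDL_GetTicks() - work_allowed_until) >= 0)
            SDL_Delay(2);
    }
}
```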

Note that there will be no timesharing switches here, as there are at
best around 1.7 “jiffies” per frame on the average OS, so the threads
essentially run until they sleep, or a higher priority thread wakes
up.

So, we’re talking about graphics here: what event will cause the main
rendering thread to wake up? Doesn’t the scheduler wait until the
next jiffy to switch threads anyway? I could be missing something –
please clue me in.

It’s virtually impossible to implement perfect full frame rate
animation with more than one thread without priorities, as you
cannot take a single dropped frame without clearly visible jerking.

Of course, one could just argue that anything that’s sufficiently
"simple" to run at full frame rate cannot be multithreaded, or that a
constant full frame rate isn’t possible on computer hardware (a lie) or
whatnot, but the problem doesn’t go away just because it’s ignored.
The question is just whether we should stick to single
threaded/cooperative threading engines, require MP machines, or
implement coarse priority control. :slight_smile:

OK, I think I need an explanation of how priority control makes it
possible to get perfect full frame rate animation, while still
allowing real work to get done in the background. I admit this is not
something I’ve looked into heavily, so maybe I just don’t understand
the approach.

On Mar 13, 2001 at 05:08 +0100, David Olofson wrote:

On Tuesday 13 March 2001 03:46, Thatcher Ulrich wrote:


Thatcher Ulrich <@Thatcher_Ulrich>
== Soul Ride – pure snowboarding for the PC – http://soulride.com
== real-world terrain – physics-based gameplay – no view limits

It’s virtually impossible to implement perfect full frame rate
animation with more than one thread without priorities, as you cannot
take a single dropped frame without clearly visible jerking.

I don’t think humans can detect more than 30-40 frames per second, so going
beyond that is almost silly.

Wrong question answered. The problem is not the frame rate in itself, but the
fact that CRT technology cannot produce lower refresh rates than some 50 Hz.
For high intensity displays like computer monitors, you have to go higher
than 70 Hz to avoid visible flickering.

Is it not the case that “smoothness” is
achieved with a constant frame rate?

Yes and no: indeed you need a constant frame rate, but you also need to make
sure that every refresh renders an updated version of the scene on the CRT,
or you’ll see ghosting effects.

Also, at refresh rates below some 100 Hz, the eye can easily tell 100 Hz
"double flash" scrolling (i.e. 50 fps on a 100 Hz display) from 50 Hz full
frame rate scrolling. The former "vibrates" slightly, while the latter
appears absolutely smooth - although flickering, if displayed on anything
like a computer monitor, as opposed to a video monitor. (Video monitors are
designed for 50-60 Hz, and have lower intensity + slower phosphor to deal
with the "insufficient" refresh rates.)

I mean regular TV is 24 (? 30??)

PAL: 25 interlaced = 50 half-frames/second
NTSC: 30 interlaced = 60 half-frames/second

Interlaced modes are different, and come with a bunch of other problems. I
find interlaced modes useless for fast animation with current hardware, as
they require accurate motion blur to avoid the classical interlace ghosting
effect.

(Note: The game console video modes that most console games use only use the
first half-frame, thus producing a 50/60 Hz mode at half the vertical
resolution.)

frames per second and it looks pretty smooth.

Yes, as long as you’re not looking at NTSC->PAL conversions, at least. :slight_smile:
However, keep in mind that there is motion blur on virtually everything you
see in the standard (interlaced) TV format. Games are virtually never motion
blurred, and probably won’t be for quite some time, due to the power required
to do it right.

Full frame rate also probably looks very smooth because it is constant?

Yes, but a constant frame rate of 60 fps on a 60 Hz display still looks
smoother than 60 fps on a 120 Hz display.

If
this is the case, this whole conversation is moot: instead of trying to
get 60 (or even something crazy like 75 or 90) frames per second, go for 30
and use the rest of the CPU time for other stuff, chances are most people
won’t notice and your game/demo/app will get to do more per frame!

Sorry, but it doesn’t work that way, unfortunately.

(It might be me having extremely fast eyes, but I don’t find that very
likely. I have blue eyes, while brown and black eyes are significantly faster
according to some scientific studies.) :slight_smile:

//David

On Tuesday 13 March 2001 05:33, Olivier Dagenais wrote:

PLEASE take this discussion offline.

Thanks,
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

David Olofson wrote:

Wrong question answered. The problem is not the frame rate in itself, but the
fact that CRT technology cannot produce lower refresh rates than some 50 Hz.
For high intensity displays like computer monitors, you have to go higher
than 70 Hz to avoid visible flickering.

In that case, wouldn’t it be possible to determine the refresh rate of
the monitor, then set the frame rate to a fraction of this rate? E.g.
if the refresh rate was 70 Hz, then couldn’t one do 35 FPS by only
updating the screen on every other refresh? By only updating the
display on refresh, wouldn’t this avoid flickering?
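
A rough sketch of that kind of pacing, assuming the refresh rate is already known (70 Hz here, purely as an example) and settling for timer-based pacing, since SDL 1.2 has no portable way to wait for the retrace directly:

```c
#include "SDL.h"

/* Hypothetical: whatever draws the scene and calls SDL_Flip()
 * or SDL_GL_SwapBuffers(). */
extern void render_and_flip(void);

/* Crude pacing at half of an assumed 70 Hz refresh (~35 fps). Real
 * retrace sync would be better where the driver supports it. */
void run_at_half_refresh(void)
{
    const Uint32 period_ms = 2 * 1000 / 70;   /* two refresh periods: ~28 ms */
    Uint32 next = SDL_GetTicks();

    for (;;) {
        render_and_flip();
        next += period_ms;
        {
            Sint32 remaining = (Sint32)(next - SDL_GetTicks());
            if (remaining > 0)
                SDL_Delay((Uint32)remaining);
        }
    }
}
```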

I’m new to SDL, but I remember doing something similar almost 10 years
ago when I wrote some graphical applications for DOS.


-Karl

Maybe
that thread needs to check the flag after every row generated or
something (if it’s a really CPU intensive calculation). Again,
that should give much more reliable results than hinting the
scheduler.

Yes, but only if you do all processing cooperatively in a single
thread. Who’s going to update the flag while the procedural texture
generator is hogging the only CPU? (Obviously, this is one case where
SMP systems are significantly different.)

It gets updated at the next slice. Or, it’s a watchdog: the rendering
thread increments a threshold timer when it’s done with a frame. The
background thread could compare the current time with the threshold
timer, and sleep when the current time exceeds the threshold time.

True, this is a bunch of work and it’s akin to single threading,
except that the threads have their own stacks and can run on different
CPUs on an SMP machine.

Good point; provided it works on UP, it could sometimes improve the
performance on (S)MP.

But I don’t think thread priorities solve this problem any better.

Lowering the priority of the background thread allows it to run "continuously"
without explicitly blocking, as the scheduler will automatically preempt it
when a higher priority thread becomes runnable.

Note that some schedulers can actually figure out for themselves which
threads should run as “background” threads, but it may take a while to adjust
(it’s usually based on CPU time metering and dynamic priorities), and isn’t
very reliable. An explicit thread priority sorts this out right away,
avoiding temporary “non standard” load conditions and similar stuff screwing
up the scheduling.
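
A minimal sketch of lowering only the background thread's timesharing priority; this relies on Linux-specific behaviour (nice values are per-thread there, unlike what POSIX specifies), and the nice value is arbitrary:

```c
#include <sys/resource.h>
#include <sys/time.h>

/* Called from inside the background thread itself. On Linux the nice
 * value is per-thread, so this lowers only the calling thread's
 * timesharing priority. A positive nice value means "lower priority". */
int make_me_a_background_thread(void)
{
    return setpriority(PRIO_PROCESS, 0, 10);
}
```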

Note that there will be no timesharing switches here, as there are at
best around 1.7 “jiffies” per frame on the average OS, so the threads
essentially run until they sleep, or a higher priority thread wakes
up.

So, we’re talking about graphics here: what event will cause the main
rendering thread to wake up?

Preferably unblocking after syncing on the "flip" command to the GPU, or
direct retrace sync, but indeed, this is a serious problem: many targets
cannot do retrace sync without busy-waiting. (Well, they can if a usable
timer feature is available, but it seems that I’m the only person working on
such a solution that doesn’t require something like RTLinux or RTAI.)

Doesn’t the scheduler wait until the
next jiffy to switch threads anyway? I could be missing something –
please clue me in.

All kernels I know any details about use the "jiffies" only for
timesharing. Rescheduling is also done indirectly by drivers every time they
unblock a thread, causing newly woken up threads to run right away, provided
they have the highest priority at the time.

It has to be done that way, or the kernel idle thread would be switched in
as soon as all threads have gone to sleep (normally blocking on I/O) for the
"jiffy period", and nothing would get done before the next jiffy.

It’s virtually impossible to implement perfect full frame rate
animation with more than one thread without priorities, as you
cannot take a single dropped frame without clearly visible jerking.

Of course, one could just argue that anything that’s sufficiently
"simple" to run at full frame rate cannot be multithreaded, or that a
constant full frame rate isn’t possible on computer hardware (a lie) or
whatnot, but the problem doesn’t go away just because it’s ignored.
The question is just whether we should stick to single
threaded/cooperative threading engines, require MP machines, or
implement coarse priority control. :slight_smile:

OK, I think I need an explanation of how priority control makes it
possible to get perfect full frame rate animation, while still
allowing real work to get done in the background. I admit this is not
something I’ve looked into heavily, so maybe I just don’t understand
the approach.

Well, let’s just return to the good old days, where virtually any machine had
at least a retrace interrupt. The background thread would be the main
program, while the rendering thread would be in the retrace ISR. Every time
there is a retrace interrupt, the rendering thread starts rendering a new
frame, and keeps working until it’s done, entirely blocking the “background
thread”.

The good part is that the background thread can just go on with its work
without worrying about adjusting to the available CPU power, or the rendering
timing - the rendering "thread" will just take the CPU time it needs whenever
it needs it, as it has higher priority.

Now, threading in a time shared system isn’t as rigid as that, but it’s not
far from it, as the rendering thread blocks frequently enough not to annoy
the timesharing system significantly. (This may of course be untrue for some
more or less broken “operating systems”…)

(Note: With SCHED_FIFO on Linux, it actually is just as rigid as a directly
interrupt driven system; if a high priority SCHED_FIFO thread refuses to
block, the system will freeze entirely - only the thread and the kernel
drivers will run.)
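
A minimal sketch of giving the time-critical thread a SCHED_FIFO priority on Linux/POSIX; it normally needs root privileges (or an equivalent rtprio allowance), and the priority value 50 is arbitrary:

```c
#include <pthread.h>
#include <sched.h>

/* Switch the calling thread to the real-time SCHED_FIFO policy.
 * Exactly as noted above: a SCHED_FIFO thread that never blocks
 * will starve everything else on a single CPU. */
int make_me_realtime(void)
{
    struct sched_param sp;
    sp.sched_priority = 50;
    return pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
}
```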

//David

On Tuesday 13 March 2001 16:06, Thatcher Ulrich wrote:

On Mar 13, 2001 at 05:08 +0100, David Olofson wrote:

On Tuesday 13 March 2001 03:46, Thatcher Ulrich wrote:

This is kind of correct, but the variance is much larger than that; I
think I read something like 40-80. Also, there is a distinct difference
between a computer game and a TV show or movie: you have control of a
video game. That distinction means that a higher frame rate is necessary
for a video game to seem smooth. 60 fps is a fairly common goal since
that is the cut-off point where something like 90% of people will see it
as smooth. This is of course in addition to all the other factors raised
by whoever it was who gave the discussion on fps versus monitor refresh
rate (sorry, I forgot your name as I was writing this).

Andrew Gerling

On Tuesday 13 March 2001 05:33, Olivier Dagenais wrote:

It’s virtually impossible to implement perfect full frame rate
animation with more than one thread without priorities, as you cannot
take a single dropped frame without clearly visible jerking.

I don’t think humans can detect more than 30-40 frames per second, so going
beyond that is almost silly.