SDL audiospec thread priority should be higher than the main app thread

I am finding that if I do not use a hack to increase the audiospec callback thread’s priority, my application’s main loop preempts it constantly, causing the sound to skip and stutter. The only other alternative would be to force the main thread to sleep, but this causes an unacceptable loss of over 20 frames per second, even when requesting the minimum possible time slice.

We use SDL_mixer, so in order to test my idea I threw this into my post-mix callback:

#if _MSC_VER >= 1500
// One-time hack: raise the audio callback thread's priority (Windows only).
static bool setpriority = false;

if(!setpriority)
{
   setpriority = true;
   SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);
}
#endif
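
(For context, the callback in question is just the usual SDL_mixer post-mix hook, registered more or less like this; MyPostMix here is a stand-in name for our actual function:)

void MyPostMix(void *udata, Uint8 *stream, int len)
{
   /* ... our music and effects get mixed into 'stream' here,
      plus the priority hack shown above ... */
}

/* at startup, after Mix_OpenAudio() succeeds: */
Mix_SetPostMix(MyPostMix, NULL);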

All the stuttering is suddenly alleviated. I find it a bit odd that SDL doesn’t already do this itself, in a cross-platform way, when it creates the sound thread. I have no inkling of how to set thread priorities on Linux, for example, but my program is cross-platform and would therefore benefit from having a proper priority set on a critical thread like the audiospec callback thread. Is there any chance this could be adapted into the library itself?

James Haley

I’ve used SDL_Mixer and did not have to do this. I think that playing
with the thread priority might be treating the symptom and not the
cause. Maybe if you play around with settings you pass to SDL_Mixer
instead?

For example, the documentation* for Mix_OpenAudio states:

" SDL must be initialized with SDL_INIT_AUDIO before this call.
frequency would be 44100 for 44.1KHz, which is CD audio rate. Most
games use 22050, because 44100 requires too much CPU power on older
computers. chunksize is the size of each mixed sample. The smaller
this is the more your hooks will be called. If make this too small on
a slow system, sound may skip. If made to large, sound effects will
lag behind the action more. You want a happy medium for your target
computer. "

That might help. If not, can you give us more information on how you
are using it?
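
For instance, something along these lines (purely an illustration, not your actual code; the frequency and chunk size are the knobs you would tune for your game):

#include <stdio.h>
#include "SDL.h"
#include "SDL_mixer.h"

int main(int argc, char *argv[])
{
   if(SDL_Init(SDL_INIT_AUDIO) < 0)
      return 1;

   /* 44100 Hz, default format, stereo, 1024-sample chunks. Smaller chunks
      mean lower latency but more frequent callbacks (and more risk of
      skipping on a slow machine); larger chunks mean laggier sound. */
   if(Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024) < 0)
   {
      fprintf(stderr, "Mix_OpenAudio: %s\n", Mix_GetError());
      SDL_Quit();
      return 1;
   }

   /* ... load and play sounds ... */

   Mix_CloseAudio();
   SDL_Quit();
   return 0;
}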

I am finding that if I do not use a hack to increase the audiospec callback
thread’s priority, my application’s main loop preempts it constantly,
causing the sound to skip and stutter.

[…]

Actually, it’s the other way around; tweaking the settings (buffer size, more
specifically) is treating the symptom rather than the cause.

You can get away with incorrect thread priorities if you increase the buffer
size enough that audio output deadlines can be met even when the audio thread
is preempted. This is avoiding the problem (by increasing the latency, thereby
relaxing the requirements) instead of solving it.
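
To put rough numbers on that tradeoff (back-of-the-envelope only; this ignores whatever extra buffering the driver and OS mixer add on top):

/* Output latency implied by the audio buffer size alone:
 *    latency_seconds = samples_per_buffer / sample_rate
 * e.g. 4096 / 44100.0 is about 0.093 s (93 ms),
 *      1024 / 44100.0 is about 0.023 s (23 ms). */
double buffer_latency_seconds(int samples_per_buffer, int sample_rate)
{
   return (double)samples_per_buffer / (double)sample_rate;
}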

BTW, the reason why this isn’t always a problem is that in general, the main
thread of a game behaves in ways that have the operating system consider it a
CPU hog, giving it a lower dynamic priority than the audio thread.


//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.— Games, examples, libraries, scripting, sound, music, graphics —.
| http://consulting.olofson.net http://olofsonarcade.com |
’---------------------------------------------------------------------’

I defer to your more thorough understanding of the lower level sound API =]

I was answering from my own (limited) experience of using the API, and
of course the documentation.

– Brian


SDL 1.3 added a thread priority API and now does this. :)
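
Roughly, the new call looks like this (a sketch; the exact names shown here, SDL_SetThreadPriority and SDL_THREAD_PRIORITY_HIGH, could still change before 1.3 ships):

#include <stdio.h>
#include "SDL.h"

static void bump_thread_priority(void)
{
   /* SDL 1.3: raise the calling thread's priority. SDL now also does this
      internally for its own audio thread. */
   if(SDL_SetThreadPriority(SDL_THREAD_PRIORITY_HIGH) < 0)
      fprintf(stderr, "SDL_SetThreadPriority: %s\n", SDL_GetError());
}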


What’s the status on 1.3? Is it stable enough to be used in serious apps now? Is the range of supported platforms the same? I’ve been out of the loop on it pretty much from the beginning. We’re currently using 1.2.14 and the newest available version of the satellite libraries (SDL_mixer and SDL_net).

-James

On Tue, 19 Apr 2011, slouken at libsdl.org wrote:

SDL 1.3 added a thread priority API and now does this. :)

I already tried bumping up the sample buffer size. At 4096 the latency is as high as it can get - any higher introduces unacceptable lag, and at this sample buffer size I am still getting stuttering sometimes.

As for the sample rate, we must use a sample rate of 44100 in this app due to demands on the music system.

-James

On Tue, 19 Apr 2011, Brian Barrett <brian.ripoff at gmail.com> wrote:

I’ve used SDL_Mixer and did not have to do this. I think that playing
with the thread priority might be treating the symptom and not the
cause. […]

What’s the status on 1.3? Is it stable enough to be used in serious apps now? […]

-James

SDL 1.3 is not available in any distro (e.g. Debian, MacPorts, FreeBSD, etc.),
since it hasn’t been released yet. When it’s going to be released, I’m not
sure.

I guess you could always take the 1.3 thread priority code out and into your
app? Unfortunately I guess that would increase your maintenance a little.

Have you added a sleep in your main thread? Even if it’s only for a little
time such as 10ms, that can help. However, probably not enough.
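
Something like this is all I mean (placeholder names for the loop body, and the 10ms figure is arbitrary):

while(running)
{
   process_events();   /* placeholder */
   update_game();      /* placeholder */
   render_frame();     /* placeholder */

   /* Give up the rest of the time slice so the audio thread (and the rest
      of the system) gets a chance to run. */
   SDL_Delay(10);
}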

cya!

Be very careful about setting thread priorities. They behave very
differently depending on the OS and architecture you are running on.

I semi-recently wrote this:
http://playcontrol.net/ewing/jibberjabber/pathological-sleep-disorder.html

Since then, I’ve noticed I’ve had similar, but slightly different
issues on Androids. Some people I’ve talked to suggest this might be
something intrinsic to the ARM design.

My advice is that thread priorities should only be set on a per-OS/arch
basis, as you discover you need them. Don’t mess with priorities on other
systems you don’t know about.
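
If you do end up doing it per-platform, the Linux side would look very roughly like the following (an untested sketch, not a recommendation; it assumes the process is even allowed to use real-time scheduling, which by default it often is not):

#include <pthread.h>
#include <sched.h>

static void raise_current_thread_priority(void)
{
   struct sched_param param;

   /* Use the lowest round-robin real-time priority. This fails with EPERM
      if the process lacks the rights (e.g. no RLIMIT_RTPRIO, not root). */
   param.sched_priority = sched_get_priority_min(SCHED_RR);

   if(pthread_setschedparam(pthread_self(), SCHED_RR, &param) != 0)
   {
      /* Leave the default SCHED_OTHER policy in place. */
   }
}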

-Eric

Beginning iPhone Games Development
http://playcontrol.net/iphonegamebook/

I tried it, here (from http://mancubus.net/svn/hosted/eternity/trunk/source/d_net.cpp ln 775):

// haleyjd 09/07/10: enhanced d_fastrefresh w/early return when no tics to run
if(counts <= 0 && d_fastrefresh && !timingdemo) // 10/03/10: not in timedemos!
   return true;

Added an I_Sleep(1) call to that if statement, which calls down to SDL_Delay() with the same parameter. That ought to sleep for the minimum possible time slice, and doing so does seem to alleviate sound stuttering, but at the cost of a large hit to the maximum FPS. On a processor-intensive level (see http://eternity.mancubus.net/pics/sunder/sunder12-1.png for an idea on just how intensive) the max FPS drops from 220 to 195 with that I_Sleep(1) call in place. That’s a pretty big hit, especially considering that the thread priority tweak entails almost no loss to framerate. But maybe I’m just being silly about the whole thing… 195 is still a good clip.

When the d_fastrefresh variable is enabled by the end-user, that tweak causes the game to return to the main while(1) loop as quickly as possible. For some reason this manages to make the program run in a very tight loop, rendering to the screen constantly, without suffering a penalty for hogging the CPU. Without d_fastrefresh enabled, the loop is limited to 35 FPS (vanilla DOOM framerate).

-James

On Tue, 19 Apr 2011, renesd at gmail.com wrote:

Have you added a sleep in your main thread? Even if it’s only for a little
time such as 10ms, that can help. […]

Yes, you make a good point. I don’t have any proof yet that this is actually a problem when running under Linux. All I really have to go on is the behavior I am getting on my Core 2 Quad under Win7 64-bit.

-James

On Tue, 19 Apr 2011, Eric <ewmailing at gmail.com> wrote:

Be very careful about setting thread priorities. They behave very
differently depending on the OS and architecture you are running on. […]

Audio stuttering on iOS, okay, I get that, it has a fairly limited CPU
compared to desktop machines, but stuttering on a Core 2 Quad in a
Doom-based game? Now, I’m just curious and want to go hack SDL’s audio code.

I tried it, here (from
http://mancubus.net/svn/hosted/eternity/trunk/source/d_net.cpp ln 775):

// haleyjd 09/07/10: enhanced d_fastrefresh w/early return when no tics to run
if(counts <= 0 && d_fastrefresh && !timingdemo) // 10/03/10: not in timedemos!
   return true;

Added an I_Sleep(1) call to that if statement, which calls down to
SDL_Delay() with the same parameter. That ought to sleep for the minimum
possible time slice,

SDL_Delay(1) means sleep at least 1 millisecond
(link: http://www.libsdl.org/docs/html/sdldelay.html).
On Win32, it is implemented using a direct call to Sleep(), as seen here
(link: http://msdn.microsoft.com/en-us/library/ms686298(v=vs.85).aspx). The
documentation reads: “If dwMilliseconds is less than the resolution of the
system clock, the thread may sleep for less than the specified length of
time. If dwMilliseconds is greater than one tick but less than two, the
wait can be anywhere between one and two ticks, and so on.” According to that
definition, it may not actually sleep as long as the parameter implies.
Maybe this is a failing of SDL, though in general I’d say SDL_Delay() in a
main loop is a failure in itself. I’ll check out the audio code right now to
see if there is anything obviously wrong. For now, instead of calling
SetThreadPriority(), try doing this in place of the sleep call:

void NewSleep(void)
{
   /* Yield to any thread that is ready to run; only actually sleep
      if nothing else wants the CPU. */
   if(SwitchToThread() == FALSE)
      Sleep(1);
}

The function SwitchToThread()
(link: http://msdn.microsoft.com/en-us/library/ms686352(VS.85).aspx)
yields the CPU to any threads ready to run on the processor. If your audio
thread is truly being starved for lack of CPU time, this (instead of sleep)
will schedule another thread to run – though in theory, this shouldn’t even
be happening. Let me know how/if that works. It is Win32-specific, so clearly
not a viable solution for portability, but it does help shed light on what
the problem really is.

and doing so does seem to alleviate sound stuttering, but at the cost of a
large hit to the maximum FPS. On a processor-intensive level (see
http://eternity.mancubus.net/pics/sunder/sunder12-1.png for an idea on
just how intensive) the max FPS drops from 220 to 195 with that I_Sleep(1)
call in place. That’s a pretty big hit, especially considering that the
thread priority tweak entails almost no loss to framerate. But maybe I’m
just being silly about the whole thing… 195 is still a good clip.

FPS = frames per second. So then inverting that (reciprocal) is the seconds
per frame (i.e. time it takes to render a frame).

1/220 = 0.0045s, or ~4.5ms
1/195 = 0.0051s, or ~5.1ms

Your scene takes an additional 600 microseconds to render. I wouldn’t say it
took a large hit, more like “falls within the margin of reasonable error”.


Honestly, if you’re above 24fps, there shouldn’t be any visual stutter
at all, at least from the human eye perspective. From the software
perspective, I don’t think it’s a big deal, either. Frame rates vary,
and some of it is because of a sleep in there (which IMHO is good),
but it can also be fixed by optimizing some routines that you are
using.
Take care,
-Alex


Last I heard, the human eye works at around 60 Hz. The reason 24 or
30 Hz works on CRTs and television is because every second scanline is
drawn in a separate pass, which yields an effective framerate of 48 or
60 Hz. Persistence of vision and phosphor persistence also tend to
blend frames together to let you get away with effective framerates
lower than 60Hz.

Some stereoscopic systems require a minimum framerate of 120 Hz for
the monitor (60Hz per eye). I assume it’s the same for the software.

Anyway, the rate at which you can show information to the user is
limited by the monitor’s refresh rate. If your monitor operates at 60
Hz and you’re running at 120 Hz, you’re wasting every second frame.
You could also end up with a tearing effect, if your video card
doesn’t hide that for you.

On 20 April 2011 10:02, Alex Barry <alex.barry at gmail.com> wrote:

Honestly, if you’re above 24fps, there shouldn’t be any visual stutter
at all, at least from the human eye perspective. From the software
perspective, I don’t think it’s a big deal, either.

There are a lot of misconceptions about frame rates and human vision.
The first confusion is between smoothness of motion and lack of
flicker. When you see the 60 Hz figure it’s talking about flicker
(assuming the source you’re reading isn’t suffering from the same
misconceptions).

Generally, humans perceive pulses as continuous light when those
pulses are coming at 60 Hz. The center of our visual field is less
sensitive to flicker which is why you might notice fluorescent lights
flickering out of the corner of your eye.

Movies are usually filmed at 24 fps, but projectors will send two
flashes of light through each frame to reduce apparent flicker. You
still see some flicker on the big screen because you’re only reaching
48 Hz.

Flicker is an entirely separate issue from the apparent smoothness of
motion, and is pretty much irrelevant to game developers. Only display
manufacturers have to worry about flicker.

We DO have to worry about the smoothness of motion, though. This is
affected by several factors:

  • Overall frame rate - All else being equal, higher frame rate
    translates to smoother motion. However, the negative effects of the
    other factors mean you really can’t make any meaningful statement in
    isolation about what the right frame rate should be.

  • Frame rate consistency - Even at high overall rates, if it’s
    constantly varying, things will look noticeably choppy. 24 fps with
    vsync will look smoother than 75 fps without vsync simply because there
    is much less variation in time between frames.

For example, if a ball is moving across the screen in 1 second, it
will look fairly smooth if the distance it travels each frame is
1/24th of the screen width. However, if it moves 1/75th the distance
in one frame, 1/40th the distance in another, 1/60th the distance in
another, and so on, you will see noticeable stutter.

  • The speed at which objects move across the visual field - (NOTE:
    this depends on the size of the display and the distance to the
    viewer.) You could get away with 1 FPS if nothing on your screen moves
    faster than a couple pixels per second. By contrast, even 120 FPS
    might not be sufficient to reduce strobing effects if you have objects
    moving around the screen extremely fast a lot of the time.

  • Motion blur - As a computer graphics technique, it’s a way to
    compensate for both of the latter smoothness factors. If a ball moves
    from one side of the screen to the other in one frame, motion blur
    fools our brains into thinking it traversed the distance. As long as
    the degree of blur depends on the time between frames, it can also
    compensate somewhat for an inconsistent frame rate.

So yeah… it’s a bit more complicated than even reputable sources
might suggest.

Do with this information as you please :)



Matthew Orlando
http://cogwheel.info

A few of my own subjective observations as I have access to a few somewhat exotic displays and a history in the Quake competitive multiplayer scene…

Tearing is most apparent when your redraw rate is near the refresh rate - if it is substantially higher (3x or higher) or substantially lower (less than half), it’s not particularly noticeable.

Film cameras rely on a series of “snapshots”, distinct complete frames, which correlate rather well with LCD/Plasma/DLP display technologies, this is effectively a “strobe image” characteristic
(blasting the entire frame out at once, or at least causing it to morph over time into the new frame all at once in the case of LCD).

As a side note, film projectors use an intentional flicker of the shutter to hide the times when it advances frames vs the times when it does not (each frame is shown with two or more black periods
inserted, one of which is hiding the advancement of the film, while the others exist only for aesthetic reasons).

Old camcorders and many cellphones are known for a “motion skew” effect, because the discharge of the CCD or CMOS pixels is being performed during scanout (the shutter is open at the time), matching
the signal rate being produced, such “motion skew” is unobvious when directly displayed on a similar CRT/LED display at the same refresh rate where the timing of pixels lighting up (being energized)
is identical to the timing of the recording itself, this happens chiefly because these cameras have no shutter (on a highend DSLR you do not get much motion skew because a mechanical shutter is used
when recording video and the scanout is performed “in the dark”).

As a case example, Quake3 is considered best played on a CRT, which is partly to do with its 125fps cap, on an LCD this is far too near the refresh rate and can demonstrate obvious tearing, enabling
vsync fixes this problem on an LCD, but isn’t particularly necessary on a CRT (likely due to the inherent weirdness of the scanout itself), unfortunately turning on vsync causes the player physics to
change and the input to become less accurately timed…

The biggest argument I’ve seen among hardcore Quake players against vsync is the input timing accuracy, it’s not that it makes the game look smoother to render at higher redraw rates than the refresh,
it’s that it feels more responsive, often this is to do with the player physics behaving differently at higher framerates (literally one can make some jumps and maneuvers at a high framerate that one
can not make at a lower framerate, this falls into the category of physics exploits of course), or simply to do with the input gathering being finer grained in such a situation.

As far as smoothness of motion (the observer situation), the higher the refresh rate the better, and vsync should always be on, motion blur can be simulated by superimposing multiple frames
(accumulation buffer / temporal antialiasing), or using advanced screenspace effects based on object motion vectors stored during rendering (Object Space Motion Blur), or simply a ghost of the
previous frame (this is actually an inherent characteristic of an LCD display, so it is quite amusing that developers often apply it as an intentional effect as well).

As far as input latency (the hardcore player situation), the higher the redraw rate the better, and vsync should be off, simply because the games often behave better at extremely high framerates, not
to do with the actual visuals - this characteristic will not appear in screenshots or ingame video capture, so it is hard to quantify objectively.

It is common knowledge that hardcore players refuse to use most 60hz LCD monitors, but many find a 120hz LCD monitor to be quite acceptable, such 120hz LCD monitors (and a few 60hz ones) lack "scaler"
hardware and thus can only run at a single native resolution (direct scanout) as such hardware is associated with higher input latency (2-3 refreshes delay) which is considered unacceptable by
hardcore players.

--
LordHavoc
Author of DarkPlaces Quake1 engine - http://icculus.org/twilight/darkplaces
Co-designer of Nexuiz - http://alientrap.org/nexuiz
"War does not prove who is right, it proves who is left.” - Unknown
"Any sufficiently advanced technology is indistinguishable from a rigged demo." - James Klass
"A game is a series of interesting choices." - Sid Meier

A few of my own subjective observations as I have access to a few somewhat
exotic displays and a history in the Quake competitive multiplayer scene…

Tearing is most apparent when your redraw rate is near the refresh rate -
if it is substantially higher (3x or higher) or substantially lower (less
than half), it’s not particularly noticeable.

Yes, exactly. It’s just like two waves producing a “beat” between them, where
the high point in the wave represents a synchronized point. If you have a
60Hz display and you update at 50, then you have a bunch of frames that
don’t match the timing. The display updates every 16.67ms and you generate a
frame every 20ms, so if you started at the same time, you’d get:

1/60 2/60 3/60 4/60
o-----o-----o-----o-----o

o------o------o------o------o
1/50 2/50 3/50 4/50

You would get one frame without tearing when n*(1000/50) = m*(1000/60), where
m, n are integers, i.e. at their least common multiple. In this case, you’d
find it at time = 100ms, or n*(1000/50) = m*(1000/60) = 100; thus n = 5 and
m = 6. So every 5th frame drawn by the graphics card is actually synchronized
with the monitor.

The closer the frame rate is to the refresh rate (e.g. 59 vs 60), the longer
the two take to line up again. In that example they only coincide exactly
once per second (after 59 rendered frames and 60 refreshes).
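
In code form, the same idea (assuming both rates are exact integer Hz and perfectly regular, which real hardware is not):

/* How often a fixed render rate and a fixed refresh rate line up exactly. */
static int gcd(int a, int b)
{
   while(b != 0)
   {
      int t = a % b;
      a = b;
      b = t;
   }
   return a;
}

static double sync_period_ms(int render_hz, int refresh_hz)
{
   /* The two clocks coincide gcd(render_hz, refresh_hz) times per second. */
   return 1000.0 / gcd(render_hz, refresh_hz);
}

/* sync_period_ms(50, 60) == 100.0  -> one tear-free frame every 100 ms
   sync_period_ms(59, 60) == 1000.0 -> they only line up once per second */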

[…]

As a case example, Quake3 is considered best played on a CRT, which is
partly to do with its 125fps cap, on an LCD this is far too near the refresh
rate and can demonstrate obvious tearing, enabling vsync fixes this problem
on an LCD, but isn’t particularly necessary on a CRT (likely due to the
inherent weirdness of the scanout itself), unfortunately turning on vsync
causes the player physics to change and the input to become less accurately
timed…

rant /

This is because Quake3, like many games, ties the input update speed
directly to the graphics rendering. In such a case, the app usually polls
the state of the mouse/keyboard, and since multiple changes might have
already occurred between “now” and the last poll, input is lost. I’ve
been there, I’ve done that, and I got wiser since then. This is a solved
problem: buffered input and/or input collection on a different thread
[hack]. On the other hand, if a program is broken such that physics behaves
dramatically differently at higher/lower FPS, then the problem has less to
do with FPS and more to do with WTF. I don’t think anyone should use the
previous examples as a guide for new behavior. Physics + fixed time step
where timestep =/= FPS is also a solved problem. Since this is an SDL
development mailing list, I’m assuming people here are generally
developing games and (hopefully!) won’t make those mistakes. Take note!
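
The usual shape of that solution, just as a sketch (the names are made up; now_seconds() stands in for whatever high-resolution timer you have):

/* Fixed-timestep simulation decoupled from the render rate. */
const double dt = 1.0 / 60.0;        /* simulation step; NOT tied to FPS */
double accumulator = 0.0;
double previous = now_seconds();

while(running)
{
   double current = now_seconds();
   accumulator += current - previous;
   previous = current;

   while(accumulator >= dt)
   {
      update_physics(dt);            /* always advances by the same amount */
      accumulator -= dt;
   }

   render_frame();                   /* as often as you like / vsync allows */
}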

The biggest argument I’ve seen among hardcore Quake players against vsync
is the input timing accuracy, it’s not that it makes the game look smoother
to render at higher redraw rates than the refresh, it’s that it feels more
responsive, often this is to do with the player physics behaving differently
at higher framerates (literally one can make some jumps and maneuvers at a
high framerate that one can not make at a lower framerate, this falls into
the category of physics exploits of course), or simply to do with the input
gathering being finer grained in such a situation.

I’ve heard this too. Someone on the IOQuake3 mailing list complained about
capped FPS making some jump impossible. It makes me want to facepalm.

[…]

As far as input latency (the hardcore player situation), the higher the
redraw rate the better, and vsync should be off, simply because the games
often behave better at extremely high framerates, not to do with the actual
visuals - this characteristic will not appear in screenshots or ingame video
capture, so it is hard to quantify objectively.

IMO, this is just bad advice. Redraw rate =/= input update rate. It can be
hard to separate the two if you use polling, which is why buffered input was
invented – so you don’t lose input data that is generated at a higher rate
than your FPS. It certainly isn’t that people are reacting faster due to
higher FPS. Humans don’t generally see > 60 FPS, as Forest mentioned.
It’s super easy to tell who does polling if there is an in-game cursor:
artificially limit the FPS to 5, then move your mouse. If the cursor moves
in a way that is dramatically different for the same amount of motion when
running at >> 5 FPS, then it probably polls.

It is common knowledge that hardcore players refuse to use most 60hz LCD
monitors, but many find a 120hz LCD monitor to be quite acceptable, such
120hz LCD monitors (and a few 60hz ones) lack “scaler” hardware and thus can
only run at a single native resolution (direct scanout) as such hardware is
associated with higher input latency (2-3 refreshes delay) which is
considered unacceptable by hardcore players.

3 cheers for hardware vendors capitalizing on crappy software. When the
software loop caps at 60 FPS due to vsync, make a monitor with a higher
vsync to combat that! Does anyone here think that is an appropriate
solution? People are tricked into thinking more Hz = better games = win.
Again, facepalm, for all parties. You know why I would buy a 120 Hz CRT/LCD?
Quad buffered stereoscopic rendering at 60 Hz per eye + shutter glasses. It
certainly isn’t because I can see so much more rich detail at 120Hz than at
60Hz. /rant

This is more or less a problem with Windows. For the most part, both
video output and input have to be handled by the same thread. You can
queue up your input and deal with it later, but that introduces a
delay between receiving an event and processing it.

You can make it worse through bad programming though of course: if 10
events are queued up while blitting a frame to the screen and you only
process one event per frame instead of clearing the queue, the
remaining events have to wait up to 10 frames to be processed with
other events queuing up behind them.
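
In SDL terms, the difference is roughly this (a sketch; handle_event() is a placeholder):

SDL_Event ev;

/* Bad: at most one queued event handled per rendered frame. */
if(SDL_PollEvent(&ev))
   handle_event(&ev);

/* Better: drain the whole queue every frame. */
while(SDL_PollEvent(&ev))
   handle_event(&ev);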

On 21 April 2011 20:51, Patrick Baggett <baggett.patrick at gmail.com> wrote:

This is because Quake3, like many games, ties the input update speed
directly to the graphics rendering.

Actually this has more to do with the fact that these monitors are designed for use with 3D shutter glasses (and are branded as such); the fact that they can play games at 120hz in non-stereo rendering is
a very nice side effect, however.

Worth noting that while there is not much more detail at 120fps/120hz than 60fps/60hz, the motion is far more fluid, which I chalk up to “real motion blur” (just because we can not perceive the frames
in their entirety, does not mean we do not perceive the blur trails as motion).

In general I favor outputting to a display at 120hz because it gives a genuine feeling of fluid “real” motion, rather than attempting to mimic this effect in software processing (everyone perceives
differently, after all).



LordHavoc
Author of DarkPlaces Quake1 engine - http://icculus.org/twilight/darkplaces
Co-designer of Nexuiz - http://alientrap.org/nexuiz
"War does not prove who is right, it proves who is left." - Unknown
"Any sufficiently advanced technology is indistinguishable from a rigged demo." - James Klass
"A game is a series of interesting choices." - Sid Meier

This is because Quake3, like many games, ties the input update speed
directly to the graphics rendering.

This is more or less a problem with Windows. For the most part, both
video output and input have to be handled by the same thread. You can
queue up your input and deal with it later, but that introduces a
delay between receiving an event and processing it.

Trivially not true. I’ve already got code that handles OpenGL on one thread,
raw input messages on another. Same window.

The magic is that SwapBuffers() on Win32 requires the HDC of the window, and
doesn’t require the thread that called wglMakeCurrent() using that HDC to be
the same thread that created it. So in effect, as long as I never do any
drawing calls that would use that HDC from the message handling thread
(easy), there isn’t any critical section to synchronize on. Even if you use
good old Win32 messages instead of raw input, AttachThreadInput() allows you
to effectively detour message handling to a different thread. Compare that
to X11, which uses a Display* from XNextEvent(). That same Display* must be
used in glXSwapBuffers(), meaning that rendering must synchronize around
message handling.

It isn’t hard but it isn’t easy, and the not being easy is why it isn’t done
more often. Queuing input inserts a delay “sort of”, but in the case of > 60
FPS, the delay is literally unnoticeable. Consider if a person moves their
mouse after frame A but before frame B. I get sent a raw input message that
is immediately handled on another thread, but is buffered. Here’s the
killer: the current time is recorded on that message. Then when I update the
game logic, I process all of the messages in order, but I can use the
timestamp to differentiate actions if they have an implicit time order. I’ve
found that it makes zero difference if you take the individual times into
account rather than just processing in order. So yes, there is a delay, but
it is generally on the order of < 1 frame time, and when the FPS is > 60,
you can’t perceive it. For < 60, yes, you barely can, but it is generally
preferable that you don’t have to move the mouse MORE to get the same amount
of movement at lower FPS. At < 10 FPS, there is a very noticeable gap
between when you move and when you see it, but if you didn’t buffer the
input, you may not even be able to move the mouse enough to turn the
graphics down in the options menu to the point where the game is playable.
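
For concreteness, the buffering I’m describing is nothing more than this kind of thing (names invented; apply_mouse_motion() is a placeholder; the push side is what the raw-input message thread does, the drain side runs in the game-logic update):

#include "SDL.h"

typedef struct
{
   Uint32 timestamp;   /* recorded when the raw input message arrived */
   int    dx, dy;      /* e.g. relative mouse motion */
} buffered_input_t;

#define INPUT_QUEUE_MAX 256
static buffered_input_t input_queue[INPUT_QUEUE_MAX];
static int              input_count = 0;
static SDL_mutex       *input_lock;  /* created once via SDL_CreateMutex() */

/* Message-handling thread: called for each raw input message. */
void queue_mouse_motion(int dx, int dy)
{
   SDL_LockMutex(input_lock);
   if(input_count < INPUT_QUEUE_MAX)
   {
      input_queue[input_count].timestamp = SDL_GetTicks();
      input_queue[input_count].dx = dx;
      input_queue[input_count].dy = dy;
      ++input_count;
   }
   SDL_UnlockMutex(input_lock);
}

/* Game thread: once per logic update, process everything, in order. */
void drain_input(void)
{
   int i;

   SDL_LockMutex(input_lock);
   for(i = 0; i < input_count; i++)
      apply_mouse_motion(&input_queue[i]);   /* placeholder */
   input_count = 0;
   SDL_UnlockMutex(input_lock);
}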

You can make it worse through bad programming though of course: if 10
events are queued up while blitting a frame to the screen and you only
process one event per frame instead of clearing the queue, the
remaining events have to wait up to 10 frames to be processed with
other events queuing up behind them.

Indeed.
