jeroen clarysse wrote:
@Rainer : thx for the feedback
yeah… sigh… I know… mutexes make things a lot more complicated… I really have to decide what to do. I could require my users to have a multi-core CPU, and use thread affinity to ensure that the two threads are NOT running on the same core. Context switches should then be minimal… I think (correct me if I'm wrong!!)
It is really a choice between two evils.

Option 1: vsync plus a render thread
PRO : we can use SDL_GL_SetSwapInterval(1) so we KNOW for SURE that images are displayed in sync. By using the “flag” system I described earlier, I can start my timer IMMEDIATELY after the RenderPresent has completed, so any user-related device input is synced to the end of the swap.
CON : risk of complications due to threads
CON : mutex code needed, which might slow things down a lot
CON : SDL uses its own thread to handle events too (again: correct me if I'm wrong). It will take a lot of fine-tuning to make sure these three threads don't interfere.

Option 2: threadless, SwapInterval(0) plus a calibrated timer
PRO : threading risks avoided, mutex bottleneck solved
CON : SDL is threaded anyway, so we are STILL based on threads!
CON : the refresh rate is not a simple constant. You can't just calculate it from a few swap() calls, I have noticed. It seems to vary a bit: not much, but a 100Hz display implies 12000 refreshes in 2 minutes (a reasonable trial length in our experiments), and an error of just 0.005 msec per refresh accumulates to a 5 msec deviation over 1000 frames. So we'd have to recalibrate this periodically… but that would imply switching from SwapInterval(0) to SwapInterval(1) periodically… making things quite complicated, since I have to guarantee that this does NOT happen at a critical time in the experiment!
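To put numbers on that drift argument, here is a back-of-envelope sketch in C (the two helper names are mine, not any SDL API; the error accumulates linearly only if it is systematic rather than random):

```c
#include <math.h>

/* Refresh count for a trial: hz refreshes per second times trial seconds. */
int refreshes_in_trial(int hz, int seconds) {
    return hz * seconds;
}

/* If every predicted refresh period is off by err_ms milliseconds, a timer
 * based on that period drifts linearly: total deviation after `frames`. */
double accumulated_drift_ms(double err_ms, int frames) {
    return err_ms * (double)frames;
}
```

With the numbers above: refreshes_in_trial(100, 120) gives 12000, and accumulated_drift_ms(0.005, 1000) gives 5 ms. A 0.05 ms per-frame error would already be 50 ms over the same stretch, which is why the period estimate has to be so precise.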
ideas ?
I have to sacrifice either sync accuracy or simplicity.
the only really proper solution would be to have a GetScanLine() routine like DirectDraw's IDirectDraw::GetScanLine()…
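On the mutex-cost CON in Option 1: the “flag” scheme needs exactly one lock/signal pair per frame, and an uncontended mutex operation typically costs well under a microsecond, so it is unlikely to be the bottleneck. Here is a minimal sketch using POSIX threads (SDL's SDL_LockMutex / SDL_CondSignal / SDL_CondWait map onto this one-to-one); run_one_frame and the simulated swap are illustrative stand-ins, not SDL calls:

```c
#include <pthread.h>
#include <stdbool.h>

/* One "swap completed" flag, guarded by a mutex + condition variable.
 * The render thread sets it right after the (simulated) buffer swap;
 * the timing thread blocks in wait_for_swap() until then. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool swapped = false;

static void *render_thread(void *arg) {
    (void)arg;
    /* ... SDL_RenderPresent() would block on vsync here ... */
    pthread_mutex_lock(&lock);
    swapped = true;               /* swap finished: timing may start */
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void wait_for_swap(void) {
    pthread_mutex_lock(&lock);
    while (!swapped)              /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

/* Run one fake "frame": spawn the render thread, wait for the flag,
 * join, and report whether the flag was seen. */
int run_one_frame(void) {
    pthread_t t;
    swapped = false;
    if (pthread_create(&t, NULL, render_thread, NULL) != 0)
        return 0;
    wait_for_swap();
    pthread_join(t, NULL);
    return swapped ? 1 : 0;
}
```

The while-loop around pthread_cond_wait matters: condition variables can wake spuriously, so the flag, not the wakeup, is the ground truth.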
In both solutions (threaded and threadless), event processing only happens when you call SDL_PumpEvents, and with vsync enabled that is effectively once per frame.
Unless you’re drawing fairly complex scenes, you certainly don’t need 10ms to render.
You should have plenty of time to also properly handle events in the main thread for each frame.
And as for an SDL event thread:
Windows - no.
Linux/X11 - no.
Mac OS X - no.
In fact, I think BeOS might be the only system to use it (but don’t quote me on that).
Sik wrote:
Late to the discussion, but I really doubt SDL_GetTicks is going to be
even remotely useful for something like this, just out of accuracy. In
fact I think the minimum guaranteed accuracy is just 10ms, that’s
1/100th of a second, which goes to say how inaccurate it is.
Probably correct. It was just an idea, which it seems he has proven wrong.
jeroen clarysse wrote:
Late to the discussion, but I really doubt SDL_GetTicks is going to be
even remotely useful for something like this, just out of accuracy. In
fact I think the minimum guaranteed accuracy is just 10ms, that’s
1/100th of a second, which goes to say how inaccurate it is.
are you sure about that? According to the documentation, SDL_GetTicks is in msec… Looking at the SDL sources, it is a wrapper around gettimeofday(), which is msec also…
but if you are right, there is always SDL_GetPerformanceCounter(), which should be more accurate, right?
interesting nonetheless !
gettimeofday is actually microsecond resolution.
And on Unix, SDL_GetPerformanceCounter also uses gettimeofday or clock_gettime, same as SDL_GetTicks, making accuracy comparable. On PSP and BeOS, it is literally just a proxy to SDL_GetTicks. In fact, SDL_GetPerformanceCounter is only useful on Windows, which is the only supported system that provides actual performance counters.
anyway, I think the 10ms guarantee is for portability reasons. Some platforms might not actually have any timing mechanism with greater precision than 10ms.

------------------------
Nate Fries