Frederik vom Hofe wrote:
Actually GLX at least has glXGetVideoSyncSGI to read out the vsync frame counter. I don’t know if there is an equivalent on Windows.
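Something along these lines is how GLX_SGI_video_sync is usually loaded and used (a rough, untested sketch for Linux/X11 only; the PFNGLX... typedefs come from glxext.h, and the extension string should be checked before trusting the pointers):

#include <GL/glx.h>
#include <GL/glxext.h>
#include <stdio.h>

/* Entry points of GLX_SGI_video_sync, fetched at runtime. */
static PFNGLXGETVIDEOSYNCSGIPROC  pglXGetVideoSyncSGI;
static PFNGLXWAITVIDEOSYNCSGIPROC pglXWaitVideoSyncSGI;

int init_video_sync(void)
{
    pglXGetVideoSyncSGI = (PFNGLXGETVIDEOSYNCSGIPROC)
        glXGetProcAddressARB((const GLubyte *)"glXGetVideoSyncSGI");
    pglXWaitVideoSyncSGI = (PFNGLXWAITVIDEOSYNCSGIPROC)
        glXGetProcAddressARB((const GLubyte *)"glXWaitVideoSyncSGI");
    return pglXGetVideoSyncSGI != NULL && pglXWaitVideoSyncSGI != NULL;
}

void video_sync_example(void)   /* needs a current, direct GLX context */
{
    unsigned int count = 0;

    /* Read the current value of the vertical retrace counter. */
    if (pglXGetVideoSyncSGI(&count) == 0)
        printf("retrace counter: %u\n", count);

    /* Block until the next vertical retrace, then read the counter again. */
    pglXWaitVideoSyncSGI(1, 0, &count);
}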
How do you want to read out reactions as short as 5 milliseconds?
60 Hz screen = full frame update in 16.67 milliseconds
120 Hz screen = full frame update in 8.33 milliseconds
200 Hz screen = full frame update in 5 milliseconds
You could limit yourself to using only the upper or lower part of the screen and thereby make the time for a “full frame update” proportionally shorter.
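For reference, the numbers above are just 1000 / refresh rate; a toy calculation (the halving for a partial-screen stimulus is only an approximation, since it ignores the blanking interval):

#include <stdio.h>

int main(void)
{
    double refresh_hz    = 120.0;   /* example monitor refresh rate        */
    double frame_ms      = 1000.0 / refresh_hz;
    double stim_fraction = 0.5;     /* stimulus occupies the top half only */

    printf("full frame update : %.2f ms\n", frame_ms);
    printf("stimulus region   : ~%.2f ms (blanking ignored)\n",
           frame_ms * stim_fraction);
    return 0;
}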
Also, flat-screen pixels that change across a wide color range may need “a long time” to change. Gray-to-gray is often 1-3 ms, but black-to-white changes need more time. (And manufacturer information on response times is garbage.)
And then there is the so-called “input lag”: a fixed time delay between the sending of data to the screen and when the screen starts to change the first pixels. Some flat screens lag more than 25 ms. But because it is fixed, you could measure it and just set a variable in your program accordingly. Note: CRTs have input lag, too!
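Once that lag has been measured (e.g. with a photodiode against the video signal), compensating for it is just adding a constant; a minimal sketch, where DISPLAY_LAG_MS is a hypothetical, per-monitor value:

#include <stdio.h>

#define DISPLAY_LAG_MS 25.0   /* hypothetical: measured once for this screen */

/* Convert the time at which the frame was handed to the display (e.g. the
   timestamp taken right after a vsynced SwapBuffers returned) into an
   estimate of when the pixels actually start changing on the panel. */
double estimated_onset_ms(double swap_time_ms)
{
    return swap_time_ms + DISPLAY_LAG_MS;
}

int main(void)
{
    double swap_time = 1000.0;                 /* example timestamp in ms */
    printf("stimulus physically visible at ~%.1f ms\n",
           estimated_onset_ms(swap_time));
    return 0;
}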
The graphics card and driver also cause some “input lag”, but it is not that noticeable.
Still the easiest way is to not use vsync at all and render as fast as possible. Then you only have to compensate for input lag.
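Turning vsync off means asking for a swap interval of 0; a sketch of the usual per-platform entry points (extension availability has to be checked first, and macOS is not covered here):

#ifdef _WIN32
#include <windows.h>

/* WGL_EXT_swap_control */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void disable_vsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC pwglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (pwglSwapIntervalEXT)
        pwglSwapIntervalEXT(0);        /* 0 = do not wait for the retrace */
}
#else
#include <GL/glx.h>
#include <GL/glxext.h>

/* GLX_EXT_swap_control */
void disable_vsync(Display *dpy, GLXDrawable drawable)
{
    PFNGLXSWAPINTERVALEXTPROC pglXSwapIntervalEXT =
        (PFNGLXSWAPINTERVALEXTPROC)
        glXGetProcAddressARB((const GLubyte *)"glXSwapIntervalEXT");
    if (pglXSwapIntervalEXT)
        pglXSwapIntervalEXT(dpy, drawable, 0);
}
#endif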
I’m aware of these limitations, and I myself also think that the obsession with display timing in psychology experiments is sometimes exaggerated! But there are a ton of articles on this subject, and (some of) my colleagues can be rather persistent about this issue… it is not so much about displaying fast, but about accurate timing:
- we want to know as precisely as possible WHEN a stimulus is presented on the screen.
- we also want to control HOW LONG it is visible with as much precision as possible
- the faster it is presented, the better (i.e.: a higher refresh rate is better)
- we need to be able to inspect devices ALL the time
- all of this should be feasible in a “framework” where the experimenter has as much freedom as (s)he wants.
Ideal would be if we stopped using ‘time’ as a measurement of onset for stimuli, and used ‘frames’ instead. So instead of saying “present the stimulus at time 100”, we would say “present it after 10 frames” (on a 100 Hz monitor). The problem is that I cannot reliably count frames: if the OS takes the processor away for just long enough to miss one refresh, everything goes bananas. A “hook” or “interrupt” in the OpenGL core that triggers a custom function at every vertical blank would have been the solution, but the hardware probably doesn’t support this, unless this glXGetVideoSyncSGI is exactly that? My app needs to be Mac + Win.
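In the absence of such a hook, one workaround is to let a vsynced buffer swap pace the loop and to timestamp each swap, so that a missed refresh can at least be detected and the frame counter corrected. A sketch under those assumptions (swap_buffers() and now_ms() are hypothetical placeholders for a blocking, vsynced SwapBuffers and a high-resolution monotonic clock):

#include <math.h>
#include <stdio.h>

extern void   swap_buffers(void);  /* hypothetical: blocks until the retrace */
extern double now_ms(void);        /* hypothetical: high-resolution clock    */

void run_trial(double refresh_hz, long stimulus_frame, long total_frames)
{
    const double frame_ms = 1000.0 / refresh_hz;
    long   frame = 0;
    double last  = now_ms();

    while (frame < total_frames) {
        /* Draw the scene for this frame; show the stimulus when it is due. */
        if (frame == stimulus_frame)
            printf("stimulus drawn on frame %ld\n", frame);

        swap_buffers();                       /* returns right after retrace */
        double t = now_ms();

        /* Estimate how many refreshes really passed since the last swap;
           anything above 1 means the OS made us miss at least one frame. */
        long elapsed = (long)floor((t - last) / frame_ms + 0.5);
        if (elapsed > 1)
            fprintf(stderr, "missed %ld frame(s) near frame %ld\n",
                    elapsed - 1, frame);

        frame += (elapsed > 0) ? elapsed : 1; /* schedule in frames, not ms */
        last = t;
    }
}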
As you see: there is nothing really complicated, but there are a lot of issues involved which CAN lead to complex situations… If this were just a simple display-image-then-wait-for-device, things would be simple. But unfortunately I’m trying to write a FRAMEWORK in which the researchers can create their experiments, and thus I have to foresee execution paths that lead to bizarre results.
Thanks for replying, all of you!