This is because Quake3, like many games, ties the input update speed
directly to the graphics rendering.
This is more or less a problem with Windows. For the most part, both
video output and input have to be handled by the same thread. You can
queue up your input and deal with it later, but that introduces a
delay between receiving an event and processing it.
Trivially not true. I’ve already got code that handles OpenGL on one
thread and raw input messages on another. Same window.
The magic is that SwapBuffers() on Win32 requires the HDC of the window,
but doesn’t require the thread that called wglMakeCurrent() using that HDC
to be the same thread that created it. So in effect, as long as I never do
any drawing calls that would use that HDC from the message-handling thread
(easy), there isn’t any critical section to synchronize on. Even if you use
good old Win32 messages instead of raw input, AttachThreadInput() allows you
to effectively detour message handling to a different thread. Compare this
to X11, which uses a Display* from XNextEvent(). That same Display* must be
used in glXSwapBuffers(), meaning that rendering must synchronize around
message handling.
Not following you here. I assume you are saying that you take a risk
of blocking on XNextEvent() which can keep you from being able to call
glXSwapBuffers() when you need to. Is that correct? If that is the
problem then the usual solution is to use connectionNumber() to get
the fd of the connection from the Display* and then play games with
the equivalent of select() to create a main loop that will react to an
input as soon as it is available without ever blocking in XNextEvent()
and allow you to swap the buffers when you want to. Add in
XEventsQueued() and you get to process all available events without
blocking. OTOH, if I am completely missing what you are talking about,
let me know.
I’m referring to the fact that the Display* is a pointer to a structure
allocated by X11 and can be corrupted when two threads try to modify it.
If you use XNextEvent() or XPending() + XNextEvent() then you are accessing
that structure. Almost every X function requires the Display*.
glXSwapBuffers() also requires that Display*. Calling both functions
simultaneously allows the Display* to be corrupted. Thus, while you could
check for events using select() [agreed], you could not dequeue them without
synchronizing with the renderer. XInitThreads() allows X calls to be
re-entrant, solving that problem; alternately, you can just provide your
own synchronization.
The WGL / GLX difference here: GLX can use remote X servers, so the
Display* is needed to specify which connection; on Win32/WGL, a handle to
the window’s DC is all that is necessary, and that is independent of any
message handling because the HDC is treated more or less as a buffer
handle rather than a logical connection.
So, we agree that this is a non-issue. So what is the issue?
Would the user notice this synchronization? I guess that depends on whether
you are polling (XQueryPointer()) or actually doing buffered event handling
(XNextEvent()). Supposedly a “hardcore gamer” can tell the difference.
I think the noticeable difference between using XNextEvent() and
XQueryPointer() is due to the round trip to and from the X server that
XQueryPointer does. It really does query the server and blocks until
it gets a reply. That can easily induce a “randomness” into updates
that the user will notice.
XEventsQueued() lets the developer decide if he wants to look only at
events in the input queue, if he wants to include events queued at the
OS level that have not yet been sent to the application, or, it can be
used to force a flush and wait for events to come back. Basically it
can be used as a substitute for XPending() or for XQLength(). By
removing the chance of a system call or, much worse, a round trip to
the server, you get more consistent response from the whole X system.
Most of what you are describing are actually bugs in the physics
simulation in the game. It has to do with how the simulated time is
accounted for, how the length of simulated time affects things like
round off errors, and the actual simulated time at which events are
applied. X events have a time stamp. While not always accurate, they
can be used to get approximately correct times for events into the
simulation. What often happens is that the events are all processed at
one real time, but they are also treated as having happened at the
same simulated time.
Because events are processed in batches for each frame at low frame
rates there is a long pause in both simulated and real time between
when events are allowed to affect the simulation. At high frame rates
the batches that are processed are smaller but the time between (both
real and simulated) processing the batches is also shorter.
As the old saying goes, “time is nature’s way of keeping everything
from happening at once.” In most games all the events that happen
during a frame are treated as happening at the end of the frame, not
at the times they actually happened. That will throw any simulation
off. If events have time stamps then the current simulation time can
be set to the time of each event in order and the physics simulation
can be made to be consistent no matter what the frame rate is. This is
nothing new, it is the basic way that discrete event simulations (like
games) have always been done. But, it doesn’t seem to have ever made
its way into popular books on game development. Most likely because
the people writing the books didn’t know that decades of prior
knowledge existed on the subject.
Oh, yeah, it looks like XCB is a better substitute for Xlib. It
handles a lot of the buffering and delay problems very nicely.
Bob Pendleton

On Thu, Apr 21, 2011 at 10:39 PM, Patrick Baggett <baggett.patrick at gmail.com> wrote:
On Thu, Apr 21, 2011 at 10:22 PM, Bob Pendleton <@Bob_Pendleton> wrote:
On Thu, Apr 21, 2011 at 8:45 PM, Patrick Baggett <baggett.patrick at gmail.com> wrote:
On Thu, Apr 21, 2011 at 8:14 PM, Kenneth Bull wrote:
On 21 April 2011 20:51, Patrick Baggett <baggett.patrick at gmail.com> wrote:
SDL mailing list
SDL at lists.libsdl.org