Use of xcb (X protocol library) in the X11 backends

I’ve started doing some coding on an Xrender extension renderer for SDL
1.3. I am tempted to use the xcb API for this work as it has a number of
things making the code more straightforward and it also just looks
nicer. As you may know, the “new” xcb API with some care can be used in
parallel with Xlib since the xcb implementation really provides Xlib anyway.

xcb has been an official part of the X.org releases since 7.2, so it is
"official" and readily available in newer system distributions using
X.org. In this sense, on these modern X.org systems, it isn’t really an
additional dependency. At the same time, I don’t know of any practical
problem of downloading and building the xcb library standalone for older
systems or systems that don’t use X.org releases. My understanding is
that autoconf-based xcb (which also has a native Win32 port) should be
sufficiently portable for SDL’s requirements.

I think it is a simple matter in the autoconf scripts to just test for
xcb automatically and still allow building all parts that use Xlib,
simply leaving out the parts that require xcb. Therefore, users with
ancient setups won’t be bothered to deal with xcb unless they want the
features built atop it. Anything based on an accelerated Xr extension is
going to imply a fairly recent server anyway.

Hopefully someone will give their 2 cents on this.

Thanks,
–jkl

I’ve started doing some coding on an Xrender extension renderer for SDL 1.3.
I am tempted to use the xcb API for this work as it has a number of things
making the code more straightforward and it also just looks nicer. As you
may know, the “new” xcb API with some care can be used in parallel with Xlib
since the xcb implementation really provides Xlib anyway.

I think that’s a brilliant idea!

You might want to consider doing a separate backend, so that on
systems that don’t have XCB installed, SDL can pick up Xlib and use
it.

Also, I think that even without using fancy Xrender, an SDL backend
using XCB would be able to take advantage of its better async
behaviour to provide faster performance, by basically using the X
server as a “render thread”, or treating it as the GPU: you send a
command, and it does the work without you having to wait for it. So
launching a blit could lock the texture internally (so that the user
can’t lock it themselves), send the X11 command, then unlock it when
we get the reply from the X server, at a later time.
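
Roughly, the pattern I have in mind looks like this (just a sketch
against plain libxcb, nothing SDL-specific, and the function name is
made up; the connection, drawables and GC are assumed to be set up
elsewhere):

    #include <stdlib.h>
    #include <xcb/xcb.h>

    /* Sketch: queue a blit, then use a cheap request that has a reply
       as a "fence" to find out when the server has finished it. */
    void async_blit(xcb_connection_t *conn, xcb_drawable_t src,
                    xcb_drawable_t dst, xcb_gcontext_t gc,
                    uint16_t w, uint16_t h)
    {
        /* CopyArea has no reply; this just queues the request. */
        xcb_copy_area(conn, src, dst, gc, 0, 0, 0, 0, w, h);

        /* GetInputFocus does have a reply, and the server answers
           requests in order, so its reply means the blit is done. */
        xcb_get_input_focus_cookie_t fence = xcb_get_input_focus(conn);
        xcb_flush(conn);

        /* ... do other work here; the texture stays "locked" ... */

        /* Only now do we block, and only if the server isn't done yet. */
        free(xcb_get_input_focus_reply(conn, fence, NULL));
        /* the texture can be unlocked at this point */
    }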


http://pphaneuf.livejournal.com/

Well, my first reaction was something like “XCB YooHOO!”. Then I thought I
ought to check on a couple of things. And, yep, XCB still doesn’t do OpenGL.
I didn’t check on XRender, I should have, but it looks like XCB still doesn’t
“get” that high-performance rendering on X is not done through the X
protocol, and since XCB doesn’t address anything but the X protocol it doesn’t
support high-performance OpenGL. If, and I repeat, if XRender works anything
like OpenGL does on X, then XCB will not support high-performance XRender
either.

High performance immediate mode rendering requires a data path that is as
direct as possible from the source of the data to the hardware that is
rendering the data. The X protocol, even with shared memory transport,
requires that data be written by one process, read by another process, and
then passed to the kernel through a device driver. That means that at least
two copies must be made and two or more process switches are required. To
get around this, OpenGL support in XLib only uses X protocol to coordinate
with the X server over the location and state of the window. The rendering
commands are sent directly to the device driver by the application without
ever going anywhere near the X server. (To the best of my knowledge I am the
first person to ever implement direct rendering under X back in '89 and
’90.)

It looks like XCB is not, and may never be, a complete replacement for XLib.
That means we can either have XCB and XLib in SDL, or we can just have XLib.
For the sake of simplicity, if for nothing else, we should just stick with
XLib. When XCB gets around to supporting direct rendering I may change my
mind. But, their total focus on the protocol and not on the reality of X
does not give me confidence in their ability to do that.

The reasons given for developing XCB could just as well be used to justify
fixing XLib. And, considering that XCB is 8+ years old and is just now
(based on blog postings from June 11, 2009
http://kesakoodi2k9.wordpress.com/2009/06/11/discoveries-about-xcb-xlib-glx-and-opengl/)
realizing that they don’t get direct rendering I don’t expect to see the
problems solved any time soon.

So, as much as I understand not wanting to use XLib (believe me, I understand
it very well, considering that I have been dealing with it, off and on, since
X11R3), I can’t say that I think switching to XCB is a good idea. XCB is not
a complete replacement for XLib; using it will add a requirement and
increase the code size without adding any value to SDL.

Bob Pendleton


Bob Pendleton wrote:

Well, my first reaction was something like “XCB YooHOO!”. Then I
thought I ought to check on a couple of things. And, yep, XCB still
doesn’t do OpenGL. I didn’t check on XRender, I should have, but it
looks like XCB still doesn’t “get” that high-performance rendering on X
is not done through the X protocol, and since XCB doesn’t address
anything but the X protocol it doesn’t support high-performance OpenGL.
If, and I repeat, if XRender works anything like OpenGL does on X, then
XCB will not support high-performance XRender either.

I don’t think this is entirely correct. Xlib itself doesn’t do OpenGL
either, nor does it do anything outside the X protocol. I think there is
some confusion here about definition and scope. When I refer to “Xlib” I
am referring to the classic X protocol client library. I thought this
was the commonly accepted definition. So speaking about Xlib necessarily
involves the X protocol. XCB is a replacement for Xlib, not a generic
direct rendering infrastructure. So direct rendering is outside the
scope of xcb (as it is with Xlib).

Strictly, XRender is simply an X11 protocol extension. While I guess it
would be theoretically possible to present a libXrender-like API that
uses a non-X-based implementation, I have never heard of such a thing and
I don’t know of plans to create one. Besides, such a thing
wouldn’t really be “X” at all anymore.

This whole thing started because of an interest in providing Xrender
services to SDL 1.3 as an alternative to the OpenGL renderer services. The
reason is that Xrender (even using the X protocol) is sometimes a
better backend solution than OpenGL for 2D acceleration concerns. There
are a couple of reasons that I am concerned with: 1) certain
embedded/low-end graphics hardware I am interested in doesn’t provide an
OpenGL “model”; 2) for some of this hardware, the EXA implementation in
the X server works very well… you get the benefit of whatever EXA
provides via Xrender.

It looks like XCB is not, and may never be, a complete replacement for
XLib. That means we can either have XCB and XLib in SDL, or we can
just have XLib. For the sake of simplicity, if for nothing else, we
should just stick with XLib. When XCB gets around to supporting direct
rendering I may change my mind. But, their total focus on the protocol
and not on the reality of X does not give me confidence in their
ability to do that.

Neither Xlib nor xcb is directly concerned with the issue of a direct
rendering infrastructure, nor with OpenGL, so that point is moot. xcb
rendering infrastructure, nor with OpenGL, so that point is moot. xcb
already is a replacement for Xlib. The issue that is talked about in the
blog post you linked to has nothing to do with the issue of direct
rendering. Instead it is that certain parts of the GLX API are tied to
the types and abstractions of Xlib. In this sense, the GLX API is
dependent on Xlib, and xcb really can’t do anything about that. There
are many existing applications and libraries dependent on Xlib, and this
will be the case indefinitely.

The reasons given for developing XCB could just as well be used to
justify fixing XLib. And, considering that XCB is 8+ years old and
is just now (based on blog postings from June 11, 2009
http://kesakoodi2k9.wordpress.com/2009/06/11/discoveries-about-xcb-xlib-glx-and-opengl/)
realizing that they don’t get direct rendering I don’t expect to see
the problems solved any time soon.

I really think your interpretation of the blog post and the development
issue at hand are mistaken and based on incorrect information.
xcb is already a replacement for Xlib. In X.org releases, for example,
Xlib itself is written in terms of xcb. If you are using a newer X.org
release you are using libxcb, as libX11 (the core part of Xlib) is
written in terms of it and dependent on it.

Understand that xcb is used in OpenGL applications and has been for a
while (compiz for example). The person writing the blog you mentioned is
obviously just getting started in the world of X and is discovering and
documenting things that have been known and done for a number of years.
The direct rendering of the OpenGL implementation and the use of xcb for
X protocol services are completely orthogonal concerns.

So, as much as I understand not wanting to use XLib (believe me, I
understand it very well, considering that I have been dealing with it,
off and on, since X11R3), I can’t say that I think switching to XCB is
a good idea. XCB is not a complete replacement for XLib; using it will
add a requirement and increase the code size without adding any value
to SDL.

I don’t think using xcb will necessarily increase the code size in a
meaningful way since on most modern systems supporting xcb, it will be
using it for everything anyway (i.e. it will be the core of Xlib
itself). As for adding a requirement, it isn’t one now, but the code
(SDL Xrender backend) doesn’t exist as a feature yet anyway so there
will have to be an additional requirement on either libXrender or xcb.
As I mentioned in my original email, since xcb and Xlib code can
coexist, lack of xcb would only imply losing the specific functionality
written against xcb.

As I consider it more, the question really ends up being: is there any
point in favoring libXrender over xcb-render? I really don’t think there
is, but I threw it out to the list before getting carried away on code I
would rather see merged into SDL 1.3 than not.

–jkl

Bob Pendleton wrote:

Well, my first reaction was something like “XCB YooHOO!”. Then I thought I
ought to check on a couple of things. And, yep, XCB still doesn’t do
OpenGL. I didn’t check on XRender, I should have, but it looks like XCB
still doesn’t “get” that high-performance rendering on X is not done
through the X protocol, and since XCB doesn’t address anything but the X
protocol it doesn’t support high-performance OpenGL. If, and I repeat, if
XRender works anything like OpenGL does on X, then XCB will not support
high-performance XRender either.

I don’t think this is entirely correct. Xlib itself doesn’t do OpenGL
either, nor does it do anything outside the X protocol.

No, XLib doesn’t, but GLX does. GLX is a protocol extension that is able
to transport 100% of all OpenGL APIs over an X protocol connection. It
isn’t just for the XGL functions that we normally think of. Take a
look at www.opengl.org/documentation/specs/glx/GLXprotocol.ps. When
direct rendering is in use, the API calls take a left turn to the
direct rendering interface and do not go through the X protocol or an
extension to the X protocol; they go directly through the direct
rendering interface.

I think there is some confusion here about definition and scope. When I
refer to “Xlib” I am referring to the classic X protocol client library.
I thought this was the commonly accepted definition.

It is the commonly accepted meaning. Just because that is the commonly
accepted meaning doesn’t mean that Xlib can’t send information through an
alternate route, or even that it has to send the information anywhere. I
once wrote a subset of Xlib that talked directly to the graphics
hardware on a machine for which no X server ever existed. Xlib is an
API specification and is not necessarily associated with the X
protocol. In the same way, GLX provides support for two protocols, one
of which always goes to the X server and one that goes to the X server
on a remote machine but makes use of direct rendering on the local
machine.

So speaking about Xlib necessarily involves the X protocol. XCB is a
replacement for Xlib, not a generic direct rendering infrastructure. So
direct rendering is outside the scope of xcb (as it is with Xlib).

I hope I have dissuaded you from that belief. I know that what I have
just said about Xlib contradicts what “everyone knows about Xlib” but
what I have said is true and accurate.

The logical place to put a link to direct rendering is in the GLX and
Xrender extension libraries. That is why I am worried that they would
not be properly handled by something described as a pure X protocol
binding.

I freely admit that I do not know the answer to the question. But, you
have not addressed my original question. If XRender is implemented using
direct rendering on any system, will it still work over direct
rendering if you use XCB?

Strictly, XRender is simply an X11 protocol extension.

Not really. XRender is an API, a protocol extension, and an
implementation. From a program’s point of view it is just an API. The
API plus the implementation can be used without the protocol. Since it
is part of X there must be an implementation in the X server for use
by remote computers, but it could also be implemented through direct
rendering. A protocol by itself is nothing. Both ends of the protocol
must be implemented. An API is not a protocol, but it can be used to
generate one. A protocol is neither an API nor an implementation.

While I guess it would be theoretically possible to present a
libXrender-like API that uses a non-X-based implementation, I have never
heard of such a thing and I don’t know of plans to create one. Besides,
such a thing wouldn’t really be “X” at all anymore.

X hasn’t really been X for at least 20 years. The original X protocol
concept was fine for low performance terminals but turned out to be
disastrously wrong for high performance immediate mode 3D graphics. We
figured that out back in the '80s. That is why the original plan for
high performance 3d graphics under X was based on a scene graph
system called PEX, which was based on PHIGS. You can edit a scene
graph quickly enough over the wire, but immediate mode rendering
bites. Anyway… Been there, Done that, Got the t-shirt, and I can see
it hanging in the back of my closet right now :) (Really, I can see
my PEX Pot T-shirt where it is hanging in my closet where I put it
shortly after Scheifler handed it to me) Yeah, I know, you don’t know
what I’m talking about because it is ancient history, don’t worry
about it. PEX really sucked.

If you can assure me that using XCB will not interfere with direct
rendering, then not only will I drop my objection. I will even propose
replacing the whole Xlib backend with one based on XCB. I really like
XCB.

Bob Pendleton


On Sun, 21 Jun 2009 20:11:21 -0500, Bob Pendleton wrote:

If you can assure me that using XCB will not interfere with direct
rendering, then not only will I drop my objection. I will even propose
replacing the whole Xlib backend with one based on XCB. I really like
XCB.

How available is XCB, though? It’s not much of an issue for people
using distro-packaged SDL or even building their own, but many games and
such include their own versions of libraries to avoid requiring the user
to install all kinds of dependencies when they just want to play some
game, and also to ensure that working versions of the required libs are
available (e.g. SDL 1.2.8 had an overly restrictive maximum surface
height that broke some games). If SDL moves to require XCB for
everything, this won’t ‘Just Work’ on systems without it…

- Gerry

I thought autoconf would allow you to build without XCB support, just like you can build it without OpenGL support.

After all, not everyone will necessarily have it.

As long as it’s an extra, and not an absolute requirement, I don’t see any problems.

Pat


I thought autoconf would allow you to build without XCB support, just like
you can build it without OpenGL support.

XCB, like Xlib, is much more central to SDL than OpenGL. If the X11/Linux
version required XCB, and you do not have it, then, in the case under
discussion, XRender would not work, even if you have it. If we changed the
whole backend to use XCB, then SDL would not work at all, it could not talk
to the keyboard or mouse and it could not open a window.

After all, not everyone will necessarily have it.

As long as it’s an extra, and not an absolute requirement, I don’t see any
problems.

That is the thing, it is an extra that other things would depend on, and it
might become an absolute requirement.

IMHO, to use XCB, or not to use XCB, is a critical question.


Not to use XCB. If you look at GTK, you can see how to mix XRender images
and XImage images fairly cleanly.


-Sam Lantinga, Founder and President, Galaxy Gameworks LLC

XCB, like Xlib, is much more central to SDL than OpenGL. If the X11/Linux
version required XCB, and you do not have it, then, in the case under
discussion, XRender would not work, even if you have it. If we changed the
whole backend to use XCB, then SDL would not work at all, it could not talk
to the keyboard or mouse and it could not open a window.

But you could definitely have a completely separate XCB backend, and
enable it only if the XCB library is available. In which case, you
have a working SDL in every case that works today, and a faster SDL on
X11 than you would have with Xlib, if you do have it.

And XRender works using XCB:
http://xcb.freedesktop.org/manual/group__XCB__Render__API.html
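
The core of an Xrender blit is then a single Composite request,
something like this (only a sketch with libxcb-render, and the
function name is made up; the source and destination pictures are
assumed to have been created already):

    #include <xcb/xcb.h>
    #include <xcb/render.h>

    /* Sketch: one Render Composite request, the heart of an Xrender
       blit.  conn, src_pict and dst_pict come from CreatePicture
       calls made elsewhere (on the texture pixmap and on the window). */
    void render_blit(xcb_connection_t *conn,
                     xcb_render_picture_t src_pict,
                     xcb_render_picture_t dst_pict,
                     uint16_t w, uint16_t h)
    {
        xcb_render_composite(conn, XCB_RENDER_PICT_OP_OVER,
                             src_pict, XCB_NONE /* no mask */, dst_pict,
                             0, 0,    /* src x, y  */
                             0, 0,    /* mask x, y */
                             0, 0,    /* dst x, y  */
                             w, h);
        xcb_flush(conn);
    }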

In fact, since current-day Xlib uses XCB under the covers, you can
pretty much assume that if it can be done with Xlib, it can be done
with XCB (and probably faster, through better parallelism with the X
server).

The only real downside of making an XCB backend is that to exploit the
latency hiding features (which would be the real attraction for using
XCB over Xlib), you’d pretty much have to write it from scratch.
Adapting the existing Xlib backend would certainly be possible, but
wouldn’t offer any advantage.

So if an XCB backend falls from the sky, it’d be nuts not to at least
make it available in SDL.


http://pphaneuf.livejournal.com/


The only real downside of making an XCB backend is that to exploit the
latency hiding features (which would be the real attraction for using
XCB over Xlib), you’d pretty much have to write it from scratch.
Adapting the existing Xlib backend would certainly be possible, but
wouldn’t offer any advantage.

Can you tell me if it is “latency hiding” or “latency eliminating”?
Two very different things.

Latency was a serious problem with X back in the olden days of ancient
computers… you know, back in the '80s and '90s. It is really not a
problem on modern computers. Between the move to multicore processors
and fixing the Linux scheduler it really is not much of a problem at
all. (For years X on Linux was badly hurt by the Linux scheduler that
was very efficient for servers but not for desktop use.)

The only way I know of to actually reduce, or eliminate, X latency at
this point is to make a dramatic change in the way it is implemented.
One approach is to add an OS feature to allow multithreaded direct
calls into a special class of shared libraries that have state that is
per library, not per thread. That is, shared libraries that are aware
of multiple threads running within them so that they can act as
servers. Events would be delivered via callbacks registered with the
server library. Essentially you need to allow one process to make a
protected call directly into another running process. That would allow
the X server to become a shared library, and there would be less
latency than there is now in OSes like Windows and Mac OS X. Several
papers have been written on the subject over the last 20 years. No one
has ever implemented it. And, to the best of my knowledge no one has
tried to implement it.

Another way to improve X latency is to convert it to a device driver.
I do know of at least one case where the X server was turned into a
device driver. Doing that resulted in wonderfully low-latency X. But a
server crash became a kernel crash. I do not believe many people would
support turning X into a device driver on Linux. Even though it would
provide a wonderful graphics system, the cost in kernel reliability
would not be tolerated by most.

Bob Pendleton


Can you tell me if it is “latency hiding” or “latency eliminating”?
Two very different things.

What XCB allows one to do is basically to separate all the
round-tripping requests into a sending and a receiving part, allowing
you to do other stuff while answers come back. Of most interest to
SDL, you can split the XSync calls (for example, the one in
SDL_RenderPresent! ouch!) into a non-blocking “send XSync request” and
then get a notification when the XSync is completed (unlocking a
texture, for example).
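
In libxcb terms the split looks roughly like this (a sketch only, with
made-up function names, using the usual GetInputFocus request as the
fence; xcb_poll_for_reply lets you check for completion without
blocking):

    #include <stdlib.h>
    #include <xcb/xcb.h>

    static xcb_get_input_focus_cookie_t present_fence;

    /* First half: queue the fence right after the rendering requests
       and keep going.  (Mark the texture as locked here.) */
    void send_present_fence(xcb_connection_t *conn)
    {
        present_fence = xcb_get_input_focus(conn);
        xcb_flush(conn);
    }

    /* Second half: poll for the reply without blocking.  When it has
       arrived, everything queued before the fence has been processed.
       (Unlock the texture here.) */
    int present_done(xcb_connection_t *conn)
    {
        void *reply = NULL;
        xcb_generic_error_t *error = NULL;

        if (!xcb_poll_for_reply(conn, present_fence.sequence,
                                &reply, &error))
            return 0;          /* not finished yet, keep working */
        free(reply);
        free(error);
        return 1;
    }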

If you remember the SDL_ASYNCBLIT flag in SDL 1.2, this would allow
you to do this, without using threads.


http://pphaneuf.livejournal.com/

The only real downside of making an XCB backend is that to exploit the
latency hiding features (which would be the real attraction for using
XCB over Xlib), you’d pretty much have to write it from scratch.
Adapting the existing Xlib backend would certainly be possible, but
wouldn’t offer any advantage.

It looks like we need more infrastructure before we could use it with
OpenGL anyhow…

Technical improvements in xcb aside, deprecating the Xlib API makes the
world a better place by default. If it gets to the point where we can
use glX, or some equivalent, without an Xlib layer, I’ll write the SDL
video target for it.

–ryan.

It looks like we need more infrastructure before we could use it with OpenGL
anyhow…

http://kesakoodi2k9.wordpress.com/2009/06/11/discoveries-about-xcb-xlib-glx-and-opengl/

Ah, yes, libGL makes calls right into libX11. This is rather annoying,
as it’s strictly a library ABI thing (libX11 will then translate those
calls right into XCB calls), but you’ve got to have libX11 in the
picture if you want to use OpenGL.

Technical improvements in xcb aside, deprecating the Xlib API makes the
world a better place by default. If it gets to the point where we can use
glX, or some equivalent, without an Xlib layer, I’ll write the SDL video
target for it.

As this guy points out, we’re still stuck with it for a bit. I think
you could still win some with an XCB backend that does
XSetEventQueueOwner, XGetXCBConnection, and then doesn’t look back.
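
The mixing itself is only a few lines (a sketch, assuming the display
has just been opened):

    #include <X11/Xlib.h>
    #include <X11/Xlib-xcb.h>
    #include <xcb/xcb.h>

    /* Sketch: open the display with Xlib (so GLX keeps working), then
       do everything else through the underlying XCB connection. */
    int open_mixed(Display **dpy_out, xcb_connection_t **conn_out)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return -1;

        /* Let XCB own the event queue instead of Xlib. */
        XSetEventQueueOwner(dpy, XCBOwnsEventQueue);

        *dpy_out = dpy;
        *conn_out = XGetXCBConnection(dpy);
        return 0;
    }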

I’m looking at you, XSync in SDL_RenderPresent! ;-)


http://pphaneuf.livejournal.com/

Can you tell me if it is “latency hiding” or “latency eliminating”?
Two very different things.

What XCB allows one to do is basically to separate all the
round-tripping requests into a sending and a receiving part, allowing
you to do other stuff while answers come back. Of most interest to
SDL, you can split the XSync calls (for example, the one in
SDL_RenderPresent! ouch!) into a non-blocking “send XSync request” and
then get a notification when the XSync is completed (unlocking a
texture, for example).

This isn’t making any sense to me. XSync is supposed to force the output
queue to be flushed and to wait for the results from that flush. The
idea is to force the state in the client and the server to be
synchronized. Doing what you suggest using XCB, splitting the request
and the reply, is the same as the result of using XFlush. The output
queue is flushed and the results come back when they come back. If you
can live without an actual synchronization of states between the
client and the server, why are you using XSync instead of XFlush?

Well, to answer my own question, it could be that you need the effect of
the discard flag in XSync. But, looking in /src/video/x11/*, while I see
a bunch of XSync calls, only one has discard set to true. The others
might work just fine with an XFlush instead. Generally XSync is for
when you must know about an error right now; if you can live with
finding out later, which is what you are going to get using XCB anyway,
shouldn’t XFlush be used? Not so?

Bob Pendleton


This isn’t making any sense to me. XSync is supposed to force the output
queue to be flushed and to wait for the results from that flush. The
idea is to force the state in the client and the server to be
synchronized. Doing what you suggest using XCB, splitting the request
and the reply, is the same as the result of using XFlush. The output
queue is flushed and the results come back when they come back. If you
can live without an actual synchronization of states between the
client and the server why are you using XSync instead of XFlush?

Because not only do we want to make sure our requests have been sent
to the X server, but we want to know when they have been completed. It
basically only sends a bogus command (GetInputFocus, in fact) that
sends a reply, and waits for the reply synchronously, just ignoring it
(and then optionally ditches the content of the event queue). Since X
processes requests in order, once you get the reply, it means that
everything before has been processed.

Well to answer my own question, it could be you need the effect of the
discard flag in XSync. But, looking in /src/video/x11/* while I see a
bunch of xsync calls, only one has discard set to true. The others
might work just fine with an XFlush instead. Generally XSync is for
when you must know about an error right now, if you can live with
finding out later, which is what you are going to get using XCB,
XFlush should be used? Not so?

The goal of the XSync in the SDL_RenderPresent is to know when the
XCopyArea is done, probably in order to behave in a way similar to
glXSwapBuffers and cap the frame rate. Except wouldn’t it be nice if
it actually behaved like glXSwapBuffers does, and only block if
there was another SDL_RenderPresent still ongoing? So this way, you
could actually start doing work on the next frame, much like you would
with OpenGL, and the X server would grind in the background, doing the
blit, and you’d actually use those multiple cores without using
threads (well, would only be useful if you have a shitty X11 hardware
driver, but that’s not that bad of an assumption to make,
unfortunately).

I’m not sure what the state of these things is in 1.3, but there are also
cases like MITSHM, where I don’t remember if XShmPutImage
round-trips (and so, is blocking), or if you’re just not supposed to
touch the XImage until it’s done (so you have no choice but to XSync
before you use that XImage again, unless you’re the confident/praying
type), but with XCB you could have SDL behave as if the texture was
locked until the reply saying that it’s done comes back, and you would
be free to do other things in the meantime (your program would only
block if you tried to lock that texture).

One last detail: does anyone know why there’s an XSync with the
“discard” flag set to true? This could randomly lose some events when
creating a streaming texture?!? Betting that XSync(True) calls are a
bug is usually a winning bet… ;-)


http://pphaneuf.livejournal.com/

This isn’t making any sense to me. XSync is supposed force the output
queue to be flushed and to wait for the results from that flush. The
idea is to force the state in the client and the server to be
synchronized. Doing what you suggest using XCB, splitting the request
and the reply, is the same as the result of using XFlush. The output
queue is flushed and the results come back when they come back. If you
can live without an actual synchronization of states between the
client and the server why are you using XSync instead of XFlush?

Because not only do we want to make sure our requests have been sent
to the X server, but we want to know when they have been completed. It
basically only sends a bogus command (GetInputFocus, in fact) that
sends a reply, and waits for the reply synchronously, just ignoring it
(and then optionally ditches the content of the event queue). Since X
processes requests in order, once you get the reply, it means that
everything before has been processed.

I believe that is what I said: you use XSync when you want to force
the client state and the server state to be synchronized.

Well to answer my own question, it could be you need the effect of the
discard flag in XSync. But, looking in /src/video/x11/* while I see a
bunch of xsync calls, only one has discard set to true. The others
might work just fine with an XFlush instead. Generally XSync is for
when you must know about an error right now, if you can live with
finding out later, which is what you are going to get using XCB,
XFlush should be used? Not so?

The goal of the XSync in the SDL_RenderPresent is to know when the
XCopyArea is done, probably in order to behave in a way similar to
glXSwapBuffers and cap the frame rate.

glXSwapBuffers doesn’t cap the frame rate unless you have your machine
configured to sync to vblank. So, if the goal is to work like
glXSwapBuffers then they should be using XFlush.

Except wouldn’t it be nice if
it actually behaved like glXSwapBuffers does, and only block if
there was another SDL_RenderPresent still ongoing?

Yes, it would. In GL we use double and triple buffering to achieve
that effect. In X we use the instruction queue to achieve the same
effect. We put the commands in a queue, flush them to make sure they
have reached the X server and then continue on. Just as GL double and
triple buffers images, X buffers commands. Your X code will block when
the queue is full just the same way GL code will block when there is
no buffer available.

So this way, you
could actually start doing work on the next frame, much like you would
with OpenGL, and the X server would grind in the background, doing the
blit, and you’d actually use those multiple cores without using
threads

Which is why you use XFlush instead of XSync. It gets you that behavior.

If you want to ensure that you do not queue up too many operations,
there are ways to do that. The easiest is to use XSendEvent to send a
fake event. Before sending it you increment a counter, and after
sending it you XFlush. When the fake event comes back to the event
processing code you decrement the counter and discard the event. If
you find that the counter is higher than you like, say 1, 2 or 3, you
can XSync, which should result in a zero counter, and then go on. That
would do what you want, without such frequent calls to XSync. It would
also work pretty much the way you want XCB’s version of XSync to work.
If you set your limit to 1 then you could have one set of
instructions running on the server while you are busy building a new
set on the client.
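
Spelled out, the trick is only a handful of lines (just a sketch; the
atom name is made up, and it assumes the normal event loop runs
between frames so the counter stays current):

    #include <X11/Xlib.h>

    static int pending;        /* marker events still in flight */
    static Atom marker_atom;   /* XInternAtom(dpy, "_FRAME_MARKER", False),
                                  done once at startup (made-up name) */

    /* After queuing a frame's worth of drawing, push a marker through
       the server.  Requests are handled in order, so when the marker
       event comes back, everything queued before it has been done. */
    void send_frame_marker(Display *dpy, Window win)
    {
        XClientMessageEvent marker = { 0 };
        marker.type = ClientMessage;
        marker.window = win;
        marker.message_type = marker_atom;
        marker.format = 32;              /* the data is unused */

        pending++;
        XSendEvent(dpy, win, False, NoEventMask, (XEvent *) &marker);
        XFlush(dpy);

        if (pending > 1)        /* too many frames still in flight? */
            XSync(dpy, False);  /* let the server catch up */
    }

    /* In the event loop: */
    void handle_event(Display *dpy, XEvent *ev)
    {
        if (ev->type == ClientMessage &&
            ev->xclient.message_type == marker_atom) {
            pending--;          /* one queued frame has completed */
            return;             /* discard the marker */
        }
        /* ... handle real events ... */
    }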

If the main reason you want XCB is to allow parallel operation between
the client and the server by use of a round trip, then use the
technique that was built in to Xlib from the beginning to allow that
to work.

Bob Pendleton

(well, would only be useful if you have a shitty X11 hardware
driver, but that’s not that bad of an assumption to make,
unfortunately).

I’m not sure what’s the state of these things in 1.3, but there’s also
cases like with MITSHM, where I don’t remember if XShmPutImage
round-trips (and so, is blocking), or if you’re just not supposed to
touch the XImage until it’s done (so you have no choice but to XSync
before you use that XImage again, unless you’re the confident/praying
type), but with XCB you could have SDL behave as if the texture was
locked until the reply saying that it’s done comes back, and you would
be free to do other things in the meantime (your program would only
block if you tried to lock that texture).

XShmPutImage has an option to request that an event be sent to tell
you when it is safe to touch the image again. So, you have the option
of either XSync-ing to make sure the operations are done or waiting
for an event.
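
For reference, the event path looks something like this (a sketch; the
shared-memory XImage is assumed to have been set up with
XShmCreateImage/XShmAttach, and a real program would dispatch the
other events it reads here instead of skipping them):

    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>

    /* Sketch: ask XShmPutImage for a completion event, then wait for
       it before touching the shared XImage again. */
    void shm_put_and_wait(Display *dpy, Window win, GC gc, XImage *img)
    {
        int completion = XShmGetEventBase(dpy) + ShmCompletion;
        XEvent ev;

        XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0,
                     img->width, img->height, True /* send_event */);
        XFlush(dpy);

        /* ... other work can happen here ... */

        do {                      /* the image is "locked" until this */
            XNextEvent(dpy, &ev);
        } while (ev.type != completion);
    }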

Again, I see no need to switch to XCB to get the behavior you want.

Bob Pendleton


Yes, it would. In GL we use double and triple buffering to achieve
that effect. In X we use the instruction queue to achieve the same
effect. We put the commands in a queue, flush them to make sure they
have reached the X server and then continue on. Just as GL double and
triple buffers images, X buffers commands. Your X code will block when
the queue is full just the same way GL code will block when there is
no buffer available.

Even without vsync, glXSwapBuffers will induce a delay somewhere so
that you’re never working on more than one frame at a time (with
double buffering, anyway), won’t it? My ex-Discreet Logic co-worker
tells me that it usually either blocks if you do a second one before
the first one is finished, or the next drawing command after the
glXSwapBuffers will block, preventing client state from overrunning
server state (or, often, hardware state) by too much.

In X, there’s no such throttling, other than the size/length of the
queue itself, so you have to provide it yourself. You can throw a
lot of XCopyArea at the X server in a second, giving it work for
many seconds to come.

But…

If you want to ensure that you do not queue up to many operations
there are ways to do that. The easiest is to use XSendEvent to send a
fake event. Before sending it you increment a counter and after
sending it you XFlush. When the fake event comes back to the event
processing code you decrement the counter and discard the event. If
you find that the counter is higher than you like, say 1, 2 or 3, you
can XSync which should result in a zero counter and then go on. That
would do what you want, without such frequent calls to XSync. It would
also work the pretty much the way you want from using XCB’s version of
XSync. If you set you limit to 1 then you could have one set of
instructions running on the server while you are busy building a new
set on the client.

… this is an excellent trick, and I think it should do very well for a
game development library, anyway.

For the general case of “real” GUI applications, XCB is still better,
because there are a number of functions that round-trip to the X server
while other things could be done (copy/paste or drag and drop, say),
and you still end up having silly slowdowns. Only a few years back,
the main thing making Firefox start really, really slowly over a
long-distance network was a long series of XQueryExtension calls, for
example, which could very well have been all sent together, then all
received together (which the XCB API allows).
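
With XCB the same queries can be pipelined, something like this (a
sketch, with a made-up function name):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xcb/xcb.h>

    /* Sketch: query several extensions in one round trip instead of
       one round trip each: send all the requests first, then collect
       all the replies. */
    void query_extensions(xcb_connection_t *conn,
                          const char **names, int count)
    {
        xcb_query_extension_cookie_t cookies[16];
        int i, n = count < 16 ? count : 16;

        for (i = 0; i < n; i++)
            cookies[i] = xcb_query_extension(conn,
                                             (uint16_t) strlen(names[i]),
                                             names[i]);
        xcb_flush(conn);

        for (i = 0; i < n; i++) {
            xcb_query_extension_reply_t *rep =
                xcb_query_extension_reply(conn, cookies[i], NULL);
            if (rep) {
                printf("%s: %spresent\n", names[i],
                       rep->present ? "" : "not ");
                free(rep);
            }
        }
    }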

But in SDL, there probably isn’t enough things like that (how often do
we really do

XCB provides a fully generic way of accomplishing this, by simply
making no function blocking, and cutting them all into send and
receive phases. But my main problem with SDL is that none of
what it does is blocking (except for this XSync that’s being called
all the time), which is kind of the opposite. A “real” GUI application
uses a much wider range of X commands, so the usefulness of XCB would
be much more important, but for SDL, this should do.

We’d just need to get rid of that XSync in SDL, then (I think the
others are all during the setup of MITSHM stuff, sounds okay to me).

XShmPutImage has an option to request that an event be sent to tell
you when it is safe to touch the image again. So, you have the option
of either XSync-ing to make sure the operations are done or to wait
for an event.

Right, I remember now that I didn’t use it because I didn’t think I
had an equivalent for the non-MITSHM case, but I was mistaken.


Yes, it would. In GL we use double and triple buffering to achieve
that effect. In X we use the instruction queue to achieve the same
effect. We put the commands in a queue, flush them to make sure they
have reached the X server and then continue on. Just as GL double and
triple buffers images, X buffers commands. Your X code will block when
the queue is full just the same way GL code will block when there is
no buffer available.

Even without vsync, glXSwapBuffers will induce a delay somewhere so
that you’re never working on more than one frame at a time (with
double buffering, anyway), won’t it?

Yeah, if there is no free buffer to work in, glXSwapBuffers will wait
until a buffer is free. It will not start drawing in a buffer that has
not been shown. The funny thing is that the definition of “shown” used
by most device driver writers does not require that the human being
ever actually sees the buffer. How else could glxgears claim to render
at 1800 FPS (as it does on my machine) when the video display is
updated at 60 FPS?

My ex-Discreet Logic co-worker
tells me that it usually either blocks if you do a second one before
the first one is finished, or the next drawing command after the
glXSwapBuffers will block, preventing client state from overrunning
server state (or, often, hardware state) by too much.

Yep, that is correct.

In X, there’s no such throttling, other than the size/length of the
queue itself, so you have to provide it yourself. You can throw a
lot of XCopyArea at the X server in a second, giving it work for
many seconds to come.

But…

If you want to ensure that you do not queue up to many operations
there are ways to do that. The easiest is to use XSendEvent to send a
fake event. Before sending it you increment a counter and after
sending it you XFlush. When the fake event comes back to the event
processing code you decrement the counter and discard the event. If
you find that the counter is higher than you like, say 1, 2 or 3, you
can XSync which should result in a zero counter and then go on. That
would do what you want, without such frequent calls to XSync. It would
also work the pretty much the way you want from using XCB’s version of
XSync. If you set you limit to 1 then you could have one set of
instructions running on the server while you are busy building a new
set on the client.

… this is an excellent trick, and I think should do very well, for a
game development library, anyway.

Thank you, I, of course, agree :)

For the general case of “real” GUI applications, XCB is still better,
because there’s a number of functions that round-trip to the X server
while other things could be done (copy/paste or drag and drop, say),
and you still end up having silly slow downs. Only a few years back,
the main thing making Firefox start really really slowly over a long
distance network was a long series of XQueryExtension, for example,
which could very well have been all sent together, then all received
together (which the XCB API allows).

I really like XCB. It seems very natural.

But in SDL, there probably isn’t enough things like that (how often do
we really do

XCB provides a fully generic way of accomplishing this, by simply
making no function be blocking, and cutting them all in send and
receive phases. But since my main problem with SDL is that none of
what it does is blocking (except for this XSync that’s being called
all the time), which is kind of the opposite. A “real” GUI application
uses a much wider range of X commands, so the usefulness of XCB would
be much more important, but for SDL, this should do.

We’d just need to get rid of that XSync in SDL, then (I think the
others are all during the setup of MITSHM stuff, sounds okay to me).

XShmPutImage has an option to request that an event be sent to tell
you when it is safe to touch the image again. So, you have the option
of either XSync-ing to make sure the operations are done or to wait
for an event.

Right, I remember now that I didn’t use it because I didn’t think I
had an equivalent for the non-MITSHM case, but I was mistaken.

Take care, this was a fun and informative discussion.

Bob Pendleton


Take care, this was a fun and informative discussion.

I agree. But…

One last detail: does anyone know why there’s an XSync with the
“discard” flag set to true? This could randomly lose some events when
creating a streaming texture?!? Betting that XSync(True) calls are a
bug is usually a winning bet… ;-)

… I still worry about that guy… Anyone know what it’s for?
Creating windows isn’t too frequent (and most SDL apps are still of
the single-window variety), which I’m guessing is the only reason
things haven’t been going wrong. You’d get the weird key stuck if you
happened to press it while a window was created, say, or the focus
would be incorrect, the kind of things I suspect people just shrug off
as yet another example of the flakiness that afflicts them every
day…

If nobody has any idea, I’d suggest applying this:

Index: src/video/x11/SDL_x11render.c
--- src/video/x11/SDL_x11render.c   (revision 4597)
+++ src/video/x11/SDL_x11render.c   (working copy)
@@ -359,7 +359,7 @@
             shm_error = False;
             X_handler = XSetErrorHandler(shm_errhandler);
             XShmAttach(renderdata->display, shminfo);
-            XSync(renderdata->display, True);
+            XSync(renderdata->display, False);
             XSetErrorHandler(X_handler);
             if (shm_error) {
                 shmdt(shminfo->shmaddr);