Stephane Peter wrote:
I know the internal blitter of my video card is a magnitude faster
than what I get now! Now, how do I get to use it???

Well, the fact is that the X server is the only software responsible for it.
Only the X server decides how to accelerate things, and there is no way that
I know of in the current X11 spec that allows a client to force the server to
use acceleration (X11 was not designed with games in mind!).
Yes, I know there is no explicit “put this in video memory” flag, but I
am trying to find out what requirements need to be satisfied for
XFree86 to put a pixmap in video memory (I know it could get
kicked out and so on, but I guess that if I use it madly, it should stay
there for a bit)…
I was thinking of creating an X11 extension that would give
DirectX-style control, but I’m not looking very seriously into it
(hacking and recompiling the X server sounds awful, but dynamically
loadable modules in XFree86 4.0 could make this much easier to hack)…
But DGA 2.0 could help (but is only fullscreen).
Most if not all X11 drivers (especially in XFree86) at least have
blits and rectangle fills hardware accelerated (otherwise the performance
would be utterly slow; try the framebuffer-only X server to get an idea)…
Window-to-window XCopyArea and XFillRectangle into a Window are definitely
accelerated (100-something megabytes per second and up), on both the S3
ViRGE and the Matrox G200.
I really have to try setting a pixmap as the background of a window and
using XClearArea; I’ve heard good things about that.
We achieved extraordinary results with some native DirectX tests by
careful hardware usage. Our way of using the hardware resulted in a very
unusual 2D library with features like memory management that are more
often found in 3D libraries than in 2D ones. For example, where most 2D
libraries give you access to the pixel data of surfaces, we could
optimize better by disallowing direct surface access and having the
library user “upload” the surface data instead, similar to texture
management in OpenGL.

This mostly works the same way in X11, although you don’t have any control
over this (which is a shame for game developers!). However, with a bit of
knowledge of how X servers work (and mostly XFree86 in this case), you can
arrange things so that you increase the chances that hardware acceleration
will be used (using Pixmaps that may be stored in video memory, for
instance)…

Doing things like DirectX is almost impossible in X, because it is just
plain dirty and in contradiction with what X11 stands for. The only solution
would be to use a new X11 extension, something like DGA 2.0 …
I could do without explicit control, as long as it at least happens sometimes!
Explicit control would be much better, that’s true, but would also be
contrary to regular X philosophy.
I tried using Pixmaps of various sizes (my 3.3.5 XFree86 advertises that
it reserves 9 128x128 areas for pixmap caching, so I tried 640x480,
128x128 and 127x127 pixmaps), to no avail.
DGA 2.0 sounds much better. I expect that the DGACopyArea in it is
hardware accelerated (though I do not exactly understand how you would
address off-screen video memory), and it even has a colorkey blit (YES!).
But it requires a lot of stuff, like being fullscreen, and all sorts of
permission crappiness…
I do not want direct access to the framebuffer, and the accelerated
functions I want should work both in windowed and fullscreen mode, so
while I appreciate being able to switch to fullscreen (which I can also
do with xf86vmode and an override_redirect window), this is a lot to
pay.
Is there any way to tell that a pixmap has been cached in hardware and
will be HW accelerated in blitting? Is there any way to request a
pixmap that must reside in acceleratable video memory?

Nope. I can tell you under what conditions XAA will currently
put a pixmap in offscreen memory though. If it has an area
larger than 64000 pixels and there’s room for it (and provided
the driver has allowed this) it will get stuck in offscreen
memory. It can get kicked out by the server at any time to
make room for something else though.
Hmm… Is this about 3.3.x or 3.9.x? Raster told me that my benchmark
test kicked ass with 3.9.16, but did suck with 3.3.x (with a pixmap of
640x480, well over 64000 pixels, and I have 8 megs of video memory,
should fit).
That’s assuming you’re talking about using Pixmaps for
back buffers. Pixmaps used as GC tiles are handled a little
differently.
“differently”? How so exactly? I am currently using a Pixmap as a back
buffer (I XCopyArea a 640x480 pixmap to a 640x480 window).

--
Pierre Phaneuf
Ludus Design, http://ludusdesign.com/
“First they ignore you. Then they laugh at you.
Then they fight you. Then you win.” – Gandhi