Hi!
Sorry for bringing this up again, but I am not sure that I understood the conclusions of the discussion about hardware surfaces under X11 on Linux. My question is basically: can I never ever get a hardware surface unless I use OpenGL and a graphics card with GL drivers that give me this? Or can I actually have drivers for X that give me hardware surfaces even for 2D programs?
You CAN get a hardware surface under X. There are a few hoops you have
to jump through, but you can get them. In fact, they work
very well once you have one. The kind of acceleration available for them
varies from driver to driver. I’m currently using the NVidia drivers,
and SDL claims that there is some hardware acceleration. The video info
for my card when set to a hardware buffer mode looks like:
SDL_VideoInfo
hw_available 1
wm_available 0
blit_hw 1
blit_hw_CC 0
blit_hw_A 0
blit_sw 0
blit_sw_CC 0
blit_sw_A 0
blit_fill 1
video_mem 32576
They are not as well supported as they could be, but that is a basic
problem with X, not with SDL.
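To make the hoops concrete, here is a minimal sketch (untested, SDL 1.2-style API; the window size and flags are my own choices) that requests a hardware double-buffered display and prints the same SDL_VideoInfo fields shown above. On X11 you typically also need the DGA backend (SDL_VIDEODRIVER=dga, running as root) before hw_available comes back as 1.

/* Sketch only: SDL 1.2-era API, build with
 *   gcc probe.c -o probe $(sdl-config --cflags --libs)
 * Exact requirements for getting a hardware surface depend on your driver. */
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Ask for a hardware, double-buffered, fullscreen surface; SDL may still
     * hand back a software surface if the driver can't do it. */
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0,
                                           SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
    if (screen == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    /* Did we actually get one? */
    printf("got a %s surface\n",
           (screen->flags & SDL_HWSURFACE) ? "hardware" : "software");

    /* Print the same fields shown in the SDL_VideoInfo dump above. */
    const SDL_VideoInfo *info = SDL_GetVideoInfo();
    printf("hw_available %d\n", info->hw_available);
    printf("wm_available %d\n", info->wm_available);
    printf("blit_hw      %d\n", info->blit_hw);
    printf("blit_hw_CC   %d\n", info->blit_hw_CC);
    printf("blit_hw_A    %d\n", info->blit_hw_A);
    printf("blit_sw      %d\n", info->blit_sw);
    printf("blit_sw_CC   %d\n", info->blit_sw_CC);
    printf("blit_sw_A    %d\n", info->blit_sw_A);
    printf("blit_fill    %d\n", info->blit_fill);
    printf("video_mem    %u\n", info->video_mem);

    SDL_Quit();
    return 0;
}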
Bob Pendleton

On Thu, 2003-05-01 at 04:39, Anders Folkesson wrote:
thanks
--Anders Folkesson
--
+----------------------------------+
+ Bob Pendleton: independent writer +
+ and programmer.                   +
+ email: Bob at Pendleton.com       +
+----------------------------------+
Bob Pendleton wrote:
Anders Folkesson wrote:
My question is basically: can I never ever get a hardware surface
unless I use OpenGL and a graphics card with GL drivers that give
me this? Or can I actually have drivers for X that give me
hardware surfaces even for 2D programs?

You CAN get a hardware surface under X. There are a few hoops you
have to jump through, but you can get them. In fact, they
work very well once you have one. The kind of acceleration available
for them varies from driver to driver. I’m currently using the
NVidia drivers,
But are you seeing any performance benefit? My experience is that
normal CPU writes across the AGP bus are a full two orders of
magnitude (!) slower than writes to AGP memory that are then DMA’d to
the card. Mind you, I’m not using SDL surfaces but OpenGL vertex
arrays inside memory allocated by the NV_vertex_array_range extension.
But I rather doubt that the card and chipset can tell the difference.
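For reference, a rough sketch of the setup described there (the hint values and helper name are my guesses at typical usage, not code from this thread): allocate memory through the GLX side of NV_vertex_array_range and declare it as the vertex array range.

/* Rough sketch of an NV_vertex_array_range setup. Assumes a direct-rendering
 * GLX context is current; in real code these entry points are usually fetched
 * with glXGetProcAddress() and the return value checked for NULL. */
#define GL_GLEXT_PROTOTYPES
#define GLX_GLXEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>
#include <GL/glxext.h>

static GLfloat *agp_verts;

void setup_agp_vertices(GLsizei nbytes)
{
    /* Commonly cited hints for AGP memory (read 0, write 0, priority ~0.5);
     * the exact behaviour is driver-specific. */
    agp_verts = (GLfloat *) glXAllocateMemoryNV(nbytes, 0.0f, 0.0f, 0.5f);

    /* Tell GL that vertex arrays live in this range so it can DMA from it
     * instead of pulling vertices through the CPU. */
    glVertexArrayRangeNV(nbytes, agp_verts);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

    /* From here on, write vertices into agp_verts and point
     * glVertexPointer() at it as usual. */
}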
Writing directly to a graphics card is just a bad idea in the modern
world. The Right Thing is to set up a temporary buffer in uncached
AGP memory and then DMA this to the card in one chunk. I don’t
believe that any 2D APIs exist for such functionality.
Even OpenGL allows this only for vertex data – mutable AGP/DMAable
framebuffers for pixel data are a planned feature for the
ARB_über_buffer extension (or whatever the final name will be). With
OpenGL, you’ll probably see the best “CPU drawing” performance by
drawing to regular memory and then sending it to the card as either a
texture or via glDrawPixels().
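A small sketch of that last suggestion (my own helper names; assumes a GL context is already current and uses a power-of-two buffer so it works on baseline GL 1.x): draw into plain system memory, then push the result to the card either as a texture update or with glDrawPixels().

/* Sketch of "draw with the CPU, then hand the pixels to the card".
 * Error handling omitted. */
#include <stdlib.h>
#include <GL/gl.h>

enum { BUF_W = 512, BUF_H = 512 };     /* power-of-two keeps baseline GL happy */

static unsigned char *pixels;          /* BUF_W * BUF_H RGBA, drawn by the CPU */
static GLuint tex;

void init_cpu_buffer(void)
{
    pixels = malloc(BUF_W * BUF_H * 4);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* Allocate once; per-frame updates go through glTexSubImage2D below. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, BUF_W, BUF_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

/* Option 1: update the texture, then draw a screen-sized textured quad. */
void upload_as_texture(void)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, BUF_W, BUF_H,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* ... draw the quad with texturing enabled ... */
}

/* Option 2: send the buffer straight to the framebuffer. */
void upload_with_drawpixels(void)
{
    glRasterPos2i(-1, -1);             /* bottom-left with the default projection */
    glDrawPixels(BUF_W, BUF_H, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}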
Andy

--
Andrew J. Ross        Beyond the Ordinary        Plausibility Productions
Sole Proprietor       Beneath the Infinite       Hillsboro, OR
                      Experience… the Plausible?
Bob Pendleton wrote:
Anders Folkesson wrote:
My question is basically: can I never ever get a hardware surface
unless I use OpenGL and a graphics card with GL drivers that give
me this? Or can I actually have drivers for X that give me
hardware surfaces even for 2D programs?

You CAN get a hardware surface under X. There are a few hoops you
have to jump through, but you can get them. In fact, they
work very well once you have one. The kind of acceleration available
for them varies from driver to driver. I’m currently using the
NVidia drivers,

But are you seeing any performance benefit?
That wasn’t the question the person asked.
My experience is that
normal CPU writes across the AGP bus are a full two orders of
magnitude (!) slower than writes to AGP memory that are then DMA’d to
the card.
This has always been the case. It was true with the ISA bus, it was true
with the VESA bus, it was true with the PCI bus… It was true long
before the PC was invented.
It looks to me that on some drivers you get hardware-accelerated blits
and fills for hardware surfaces, so you might sometimes get a
performance enhancement from putting all your sprites in hardware
buffers and blitting them to the display surface. And you might get
another performance enhancement from hardware fills for clearing the
back buffer.
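For example, something along these lines (hypothetical helper names; whether the blits and fills actually run on the card depends on the blit_hw/blit_fill flags reported above):

/* Sketch: keep sprites in video memory so sprite->screen blits can stay on
 * the card, and clear the back buffer with a (possibly accelerated) fill.
 * Error checking omitted; "screen" is the SDL_HWSURFACE display surface. */
#include "SDL.h"

/* Copy a loaded sprite into a surface requested in video memory.
 * SDL may still give us system memory if the driver can't oblige. */
SDL_Surface *make_hw_sprite(SDL_Surface *loaded)
{
    SDL_Surface *hw = SDL_CreateRGBSurface(SDL_HWSURFACE,
                                           loaded->w, loaded->h,
                                           loaded->format->BitsPerPixel,
                                           loaded->format->Rmask,
                                           loaded->format->Gmask,
                                           loaded->format->Bmask,
                                           loaded->format->Amask);
    SDL_BlitSurface(loaded, NULL, hw, NULL);   /* one-time copy across the bus */
    return hw;
}

void draw_frame(SDL_Surface *screen, SDL_Surface *sprite, int x, int y)
{
    SDL_Rect dst;

    /* Clear the back buffer; with blit_fill set this can be a hardware fill. */
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0, 0, 0));

    /* Video-memory-to-video-memory blit; nothing crosses the bus. */
    dst.x = (Sint16) x;
    dst.y = (Sint16) y;
    SDL_BlitSurface(sprite, NULL, screen, &dst);
}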
OTOH, a blit-dominated, sprite-based game will very likely get a
performance boost from using hardware buffers and dirty rectangles,
because it isn't blitting large areas of the screen across the bus,
and it should get a more consistent frame rate from using hardware
page flipping.
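A sketch of the presentation side of that (again my own names; SDL_Flip() is only a real page flip when the screen is a double-buffered hardware surface):

/* Sketch of the two update paths: page flipping for a SDL_DOUBLEBUF hardware
 * screen, SDL_UpdateRects for a dirty-rectangle scheme. */
#include "SDL.h"

#define MAX_DIRTY 64

static SDL_Rect dirty[MAX_DIRTY];
static int num_dirty;

/* Record a region touched this frame (old and new sprite positions). */
void mark_dirty(SDL_Rect r)
{
    if (num_dirty < MAX_DIRTY)
        dirty[num_dirty++] = r;
}

void present(SDL_Surface *screen)
{
    if (screen->flags & SDL_DOUBLEBUF) {
        SDL_Flip(screen);                          /* hardware page flip */
    } else {
        SDL_UpdateRects(screen, num_dirty, dirty); /* push only the changed areas */
    }
    num_dirty = 0;
}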
Mind you, I’m not using SDL surfaces but OpenGL vertex
arrays inside memory allocated by the NV_vertex_array_range extension.
But I rather doubt that the card and chipset can tell the difference.

Writing directly to a graphics card is just a bad idea in the modern
world. The Right Thing is to set up a temporary buffer in uncached
AGP memory and then DMA this to the card in one chunk. I don’t
believe that any 2D APIs exist for such functionality.

Even OpenGL allows this only for vertex data – mutable AGP/DMAable
framebuffers for pixel data are a planned feature for the
ARB_über_buffer extension (or whatever the final name will be). With
OpenGL, you’ll probably see the best “CPU drawing” performance by
drawing to regular memory and then sending it to the card as either a
texture or via glDrawPixels().
The whole point is that OpenGL is not supported on all platforms that
SDL runs on. It all depends on what you want to run on and on the
needs of your game/application. If it is fast enough
using hardware buffers and you get other benefits from using them, like
smoother animation or shorter development times, or a wider range of
target platforms, then hardware buffers make perfect sense.
Bob Pendleton

On Thu, 2003-05-01 at 10:43, Andy Ross wrote:
Andy
--
+----------------------------------+
+ Bob Pendleton: independent writer +
+ and programmer.                   +
+ email: Bob at Pendleton.com       +
+----------------------------------+