[…]
What OS?
Linux
and there’s no 3D acceleration on my video card.
Ok.
I feel confused now.
I want to play with ray casting in 640x480 16bpp; what I will do is:
2D texture mapping
Sprite blitting
and I have this data:
If I use a software surface, I'll be fast at accessing pixels but slow
at clipping and at copying it into VRAM.
If I use a hardware surface, I'll be slow at accessing pixels directly
but fast at clipping.
Hmm… I assume you meant “blitting”, not “clipping”. If so, yeah,
that’s basically it.
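For reference, here's a minimal sketch of per-pixel access, SDL 1.2
style, assuming a 16 bpp surface (put_pixel16() is just an illustrative
name, not SDL API). The locking is a no-op on plain software surfaces,
but on hardware surfaces it can stall the accelerator, on top of the
slow VRAM access itself:

    #include "SDL.h"

    /* Plot one pixel on a 16 bpp surface. (Sketch only.) */
    static void put_pixel16(SDL_Surface *s, int x, int y, Uint16 color)
    {
        Uint16 *row;
        if(SDL_MUSTLOCK(s))
            if(SDL_LockSurface(s) < 0)
                return;
        /* pitch is in bytes, so do the row math on a byte pointer */
        row = (Uint16 *)((Uint8 *)s->pixels + y * s->pitch);
        row[x] = color;
        if(SDL_MUSTLOCK(s))
            SDL_UnlockSurface(s);
    }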
So I guess I should do the texture mapping with a software surface,
copy the surface into VRAM, and then blit the sprites in.
Probably, but it’s a good idea to check whether blits from
SDL_DisplayFormat()ed surfaces to the screen actually are h/w
accelerated.
If they aren’t, you’re better off doing all rendering into the software
back buffer. OTOH, it seems like you’ll effectively be doing that anyway
on X, with the possible exception of DGA 2 when running as root. Not
sure about that, though, and of course, it differs between X versions,
video cards and configurations.
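One way to check is to just ask SDL; SDL_GetVideoInfo() reports what
the target claims to accelerate. Something like this throwaway test
program (the mode and flags are just an example):

    #include <stdio.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        const SDL_VideoInfo *vi;
        if(SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;
        SDL_SetVideoMode(640, 480, 16, SDL_HWSURFACE);
        vi = SDL_GetVideoInfo();
        printf("hw surfaces available:    %d\n", vi->hw_available);
        printf("hw->hw blits accelerated: %d\n", vi->blit_hw);
        printf("sw->hw blits accelerated: %d\n", vi->blit_sw);
        SDL_Quit();
        return 0;
    }

IIRC, SDL also sets SDL_HWACCEL in a surface's flags when blits from
it are accelerated, so you can check the SDL_DisplayFormat()ed surface
itself as well.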
Given that moving a software surface to the video card uses 60% of my
CPU power, I bet I’ll get poor animation once I do something more
than updating an empty surface.
No. You’re already taking the hit when performing the software->screen
blit. Rendering into the software buffer is where you have the raw,
unrestricted CPU power at your disposal - and you have 40% of the CPU
power left per frame, which should be enough to fill the screen several
times over, on a PC133 or RDRAM system.
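That is, something along these lines (a sketch only; render_frame()
and the 'running' logic are hypothetical placeholders for your own
code):

    #include "SDL.h"

    extern int running;                          /* hypothetical */
    extern void render_frame(SDL_Surface *buf);  /* hypothetical */

    void main_loop(void)
    {
        SDL_Surface *screen, *back, *tmp;
        screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);
        /* A sysRAM back buffer in the screen's pixel format */
        tmp = SDL_CreateRGBSurface(SDL_SWSURFACE, 640, 480, 16,
                                   0, 0, 0, 0);
        back = SDL_DisplayFormat(tmp);
        SDL_FreeSurface(tmp);
        while(running)
        {
            render_frame(back);  /* raw CPU power, no VRAM stalls */
            SDL_BlitSurface(back, NULL, screen, NULL);
            SDL_UpdateRect(screen, 0, 0, 0, 0);  /* one sysRAM->VRAM hit */
        }
    }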
I’m sure I’m missing a point: with a 600 MHz CPU I’ll be slower than a
486 doing the same thing (Wolfenstein 3D), albeit at double the
resolution (640x480).
Well, sure, we have a serious performance issue in the system->VRAM
blitting, but don’t be unfair here:
* You're pumping four times as many pixels, each taking two
bytes instead of one. That's 8 times more data per frame
(640x480x2 = 600 kB, vs. 320x240x1 = 75 kB).
* Depending on what you do, you have 15-40 times the CPU power,
so even without this blitting issue, you're not getting away
with sloppy code, especially not if you're going to do more
advanced rendering than the Wolfenstein 3D engine.
Now, 60% of the CPU power is still a big loss, but put in relation to how
much more work you’re actually going to do, it might not seem that bad.
What is wrong with the design?
Nothing, really. It doesn’t get more fun than that, “thanks” to the
crappy design of current video cards. :-/
However, you may try to avoid using software surfaces entirely,
allowing SDL to use h/w accelerated VRAM->VRAM blitting, which is
available on several targets. And then there’s OpenGL…
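For example, something like this sketch (whether the surfaces actually
end up in VRAM, and whether the blits get accelerated, depends entirely
on the target and driver; "sprite.bmp" is a hypothetical file):

    #include "SDL.h"

    void hw_sketch(void)
    {
        SDL_Surface *screen, *loaded, *sprite;
        SDL_Rect r;

        /* Ask for a double buffered hardware display surface */
        screen = SDL_SetVideoMode(640, 480, 16,
                      SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);

        /* SDL_DisplayFormat() may place the copy in VRAM, which is
         * what makes accelerated VRAM->VRAM blits possible */
        loaded = SDL_LoadBMP("sprite.bmp");  /* hypothetical file */
        sprite = SDL_DisplayFormat(loaded);
        SDL_FreeSurface(loaded);

        r.x = 100;
        r.y = 100;
        SDL_BlitSurface(sprite, NULL, screen, &r);
        SDL_Flip(screen);  /* page flip, or back->front blit */
    }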
And don’t even think about rendering directly to VRAM for full screen
animation! That will guarantee poor performance on virtually all
targets. heh
I feel as if someone just told me that Santa Claus doesn’t exist :-/
Yeah, I know that feeling… :-/
Thanks for your answer, David. It’ll take more time than I thought to
get something that plays smoothly, but I’m not giving up.
It’s definitely possible. For example, ZDoom is very playable on Linux
at 640x480.
But still, the Windows version is running circles around the Linux version
at the same resolution… It will be even “worse” with the new version,
as I made Randy aware of how slow CPU writing to VRAM really is. After
changing to DMA blits with DirectX, he managed to get some 35 fps at
1400x1050! (Note that ZDoom still works with 8 bpp, and tries to get that
screen pixel format whenever possible.)
//David Olofson -- Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
|      Multimedia Application Integration Architecture      |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'