Looking for advice

Hello,

I’m starting some new code with SDL, and I wanted to test my little code design.
My code is:

normaltime = 1000 / 30;
video_init();
lasttime = SDL_GetTicks();
while (!done) {
    raycast();
    currenttime = SDL_GetTicks();
    deltatime = currenttime - lasttime;
    if (deltatime < normaltime) {
        SDL_Delay(normaltime - deltatime);
        lasttime = SDL_GetTicks();
    } else {
        lasttime = currenttime;
    }
    update_screen();
    wait_event();
}

with update_screen defined as :

int update_screen()
{
    SDL_UpdateRect(screen, 0, 0, 0, 0);
    return 0;
}

and my video_init:

int video_init(void)
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);
    printf("start init\n");
    screen = SDL_SetVideoMode(WIDTH, HEIGHT, 16, SDL_SWSURFACE);
    if (screen == NULL) {
        printf("Error SDL_SetVideoMode\n");
        exit(0);
    }
    return 0;
}

I know it’s a stupid test, but well, I did it. With that I used 60% of my CPU.
So now I’m afraid: I’m doing nothing and my code uses nearly all my CPU.
My laptop is a 600 MHz, and well, I can’t say it’s SDL’s fault: sdlquake runs
smoothly…

Let me feed you with some data: I’m using a laptop with a cheap video card
(NeoMagic, 3 MB). The future of the code is to play with ray casting.

What is wrong with the design ?

I hope you can give me some advice or tips.

–
Best regards
Laurent SIMON

argh… it was asked 100s of times… do while(1); and it will burn 100% of
your CPU ;). If you don’t like this, just add SDL_Delay(2); at the end of the
loop…
good luck,
–skinncode


Thanks ;-)
But it’s what I do in my little code; I try to be at max 30 fps.

On Tue, 30 Oct 2001 21:46:02 +0200, skinncode wrote:


Best regards
Laurent SIMON

[…]

with update_screen defined as :

int update_screen()
{
    SDL_UpdateRect(screen, 0, 0, 0, 0);
    return 0;
}

You’re definitely going to burn plenty of CPU time in SDL_UpdateRect()
while copying to VRAM, at least on platforms without busmaster DMA
blitting from system RAM to VRAM.
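If most of the screen is static between frames, one way to cut that cost is to push only the rectangles that actually changed, via SDL_UpdateRects(). This is just an untested sketch (the mark_dirty()/flush_dirty() helpers and the fixed 64-entry table are made up for illustration); for a full-screen raycaster that redraws everything every frame it won’t buy you much, of course:

#include "SDL.h"

/* Collect the areas that changed this frame and push only those,
 * instead of SDL_UpdateRect(screen, 0, 0, 0, 0). */
static SDL_Rect dirty[64];
static int ndirty = 0;

void mark_dirty(Sint16 x, Sint16 y, Uint16 w, Uint16 h)
{
    if (ndirty < 64) {
        dirty[ndirty].x = x;
        dirty[ndirty].y = y;
        dirty[ndirty].w = w;
        dirty[ndirty].h = h;
        ++ndirty;
    }
}

void flush_dirty(SDL_Surface *screen)
{
    if (ndirty > 0) {
        SDL_UpdateRects(screen, ndirty, dirty);  /* copies only dirty areas */
        ndirty = 0;
    }
}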

and my video_init :
int video_init(void)
{
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER);
    printf("start init\n");
    screen = SDL_SetVideoMode(WIDTH, HEIGHT, 16, SDL_SWSURFACE);

What’s the window size?

I know it’s a stupid test, but well, I did it. With that I used 60% of
my CPU. So now I’m afraid: I’m doing nothing and my code uses nearly
all my CPU.

You’re pumping quite a bit of data from a software surface into VRAM -
that’s hardly what I’d call “nothing”. :-)

My laptop is a 600 MHz, and well, I can’t say it’s SDL’s fault:
sdlquake runs smoothly…

That’s probably because 1) it has a very fast software rasterizer, which
means it does just fine with the remaining 40% of CPU power, or 2) it’s
using OpenGL for rendering, which means you get around this VRAM access
issue completely, even on Linux. (All textures are in VRAM or texture
RAM, or at worst, in the AGP aperture, from which fast busmaster DMA
transfers are possible. Yeah, even that works on Linux as well! ;-)
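For completeness, requesting an OpenGL-backed window with SDL looks roughly like this (untested sketch; init_gl() is a made-up helper, and it obviously only pays off if you actually have 3D acceleration):

#include "SDL.h"

/* Request an OpenGL-backed window; all rendering then goes through the
 * GL driver instead of CPU writes to VRAM, and you present the frame
 * with SDL_GL_SwapBuffers(). */
SDL_Surface *init_gl(int w, int h)
{
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    return SDL_SetVideoMode(w, h, 0, SDL_OPENGL);
}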

Let me feed you with some data: I’m using a laptop with a cheap video
card (NeoMagic, 3 MB). The future of the code is to play with ray
casting.

What OS?

What is wrong with the design ?

Nothing, really. It doesn’t get more fun than that, “thanks” to the
crappy design of current video cards. :-/

However, you may try to avoid using software surfaces entirely, allowing
SDL to use h/w accelerated VRAM->VRAM blitting, which is available on
several targets. And then there’s OpenGL…

And don’t even think about rendering directly to VRAM for full screen
animation! That will guarantee poor performance on virtually all
targets. heh
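For example, something like this (untested sketch; init_hw_screen() is a made-up helper) asks for a hardware, double-buffered, fullscreen surface and checks what SDL actually gave you - SDL silently falls back to a software surface if the target can’t do it:

#include <stdio.h>
#include "SDL.h"

/* Ask for a hardware, double-buffered, fullscreen surface and report
 * what we actually got; SDL falls back to software if it has to. */
SDL_Surface *init_hw_screen(int w, int h, int bpp)
{
    SDL_Surface *screen = SDL_SetVideoMode(w, h, bpp,
                              SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
    if (screen == NULL)
        return NULL;

    if (screen->flags & SDL_HWSURFACE)
        printf("Hardware surface; VRAM->VRAM blits may be accelerated.\n");
    else
        printf("Fell back to a software surface.\n");

    /* With SDL_DOUBLEBUF, call SDL_Flip(screen) once per frame instead
     * of SDL_UpdateRect(screen, 0, 0, 0, 0). */
    return screen;
}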

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
-------------------------------------> http://olofson.net -'

On Tuesday 30 October 2001 22:24, mewn wrote:

“David Olofson” <david.olofson at reologica.se> wrote in message news:
mailman.1004483114.2434.sdl at libsdl.org
[…]

with update_screen defined as :

Usually, the best way is with a semaphore:

int update = 0;

Whenever you are blitting, put:

update = 1;

then…

if (update)
{
    update_screen();
    update = 0;
}

Hope that will help. Jocelyn.
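Something like this (untested; done, raycast(), update_screen() and wait_event() are from the original post, and scene_changed is a made-up flag your event handler would set) shows where the flag goes in the main loop:

/* Declarations for the poster's own globals/functions, so this sketch
 * compiles on its own; scene_changed is the hypothetical "something
 * moved" flag. */
extern int done;
extern int scene_changed;
void raycast(void);
int update_screen(void);
void wait_event(void);

void main_loop(void)
{
    int update = 0;

    while (!done) {
        if (scene_changed) {
            raycast();          /* render into the back buffer */
            scene_changed = 0;
            update = 1;
        }
        if (update) {
            update_screen();    /* SDL_UpdateRect(screen, 0, 0, 0, 0) */
            update = 0;
        }
        wait_event();           /* event handling / frame cap as before */
    }
}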


[…]
My laptop is a 600 MHz, and well, I can’t say it’s SDL’s fault:
sdlquake runs smoothly…

That’s probably because 1) it has a very fast software rasterizer, which
means it does just fine with the remaining 40% of CPU power, or 2) it’s
using OpenGL for rendering, which means you get around this VRAM access
issue completely, even on Linux. (All textures are in VRAM or texture
RAM, or at worst, in the AGP aperture, from which fast busmaster DMA
transfers are possible. Yeah, even that works on Linux as well! ;-)

[…]
What OS?

Linux
and there’s no 3D acceleration on my video card.

I feel confused now.
I want to play with ray casting in 640x480 16bpp; what I will do is:

2D texture mapping
Sprite blitting

and I have this data:
If I use a software surface, I’ll be fast for accessing pixels but slow for
clipping and for copying it into VRAM.
If I use a hardware surface, I’ll be slow for accessing pixels directly but fast
for clipping.

So I guess I should do the texture mapping with a software surface, copy the
surface into VRAM, then blit sprites in.
Given that moving a software surface to the video card uses 60% of my CPU power,
I bet I’ll get poor animation when I do something other than updating an
empty surface.
I’m sure I’m missing a point: with a 600 MHz CPU I’ll be slower than a 486 doing the
same thing (Wolfenstein 3D), with a double-sized resolution (640x480).

What is wrong with the design ?

Nothing, really. It doesn’t get more fun than that, “thanks” to the
crappy design of current video cards. :-/

However, you may try to avoid using software surfaces entirely, allowing
SDL to use h/w accelerated VRAM->VRAM blitting, which is available on
several targets. And then there’s OpenGL…

And don’t even think about rendering directly to VRAM for full screen
animation! That will guarantee poor performance on virtually all
targets. heh

I feel like someone told me that Santa Claus doesn’t exist :-/

Thanks for your answer, David; it’ll take more time than I thought to have
something that plays smoothly, but I don’t give up.

On Wed, 31 Oct 2001 00:06:12 +0100, David Olofson <david.olofson at reologica.se> wrote:


Best regards
Laurent SIMON

[…]

What OS?

Linux
and there’s no 3D acceleration on my video card.

Ok.

I feel confused now.
I want to play with ray casting in 640x480 16bpp; what I will do is:

2D texture mapping
Sprite blitting

and I have this data:
If I use a software surface, I’ll be fast for accessing pixels but slow
for clipping and for copying it into VRAM.
If I use a hardware surface, I’ll be slow for accessing pixels directly
but fast for clipping.

Hmm… I assume you meant “blitting”, not “clipping”. :-) If so, yeah,
that’s basically it.

So I guess I should do the texture mapping with a software surface, copy
the surface into VRAM, then blit sprites in.

Probably, though it’s a good idea to check whether blits from
SDL_DisplayFormat()ed surfaces to the screen actually are h/w
accelerated.

If they aren’t, you’re better off doing all rendering into the software
back buffer. OTOH, it seems like you’ll effectively be doing that anyway
on X, with the possible exception of DGA 2 when running as root. Not
sure about that, though, and of course, it differs between X versions,
video cards and configurations.
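A rough, untested sketch of such a check (report_accel() and prepare_sprite() are made-up helpers; SDL_GetVideoInfo() and SDL_DisplayFormat() are the relevant SDL calls):

#include <stdio.h>
#include "SDL.h"

/* Print what the current video target claims it can accelerate.
 * (Call this after SDL_SetVideoMode() to see what you actually got.) */
void report_accel(void)
{
    const SDL_VideoInfo *vi = SDL_GetVideoInfo();

    printf("hardware surfaces available: %d\n", (int)vi->hw_available);
    printf("hw->hw blits accelerated:    %d\n", (int)vi->blit_hw);
    printf("sw->hw blits accelerated:    %d\n", (int)vi->blit_sw);
    printf("video memory (KB):           %u\n", (unsigned)vi->video_mem);
}

/* Convert a loaded sprite to the display's pixel format once, so later
 * blits to the screen don't need a per-pixel conversion. */
SDL_Surface *prepare_sprite(SDL_Surface *loaded)
{
    SDL_Surface *converted = SDL_DisplayFormat(loaded);

    if (converted != NULL) {
        SDL_FreeSurface(loaded);
        return converted;
    }
    return loaded;   /* conversion failed; keep the original */
}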

Given that moving a software surface to the video card uses 60% of my
CPU power, I bet I’ll get poor animation when I do something other
than updating an empty surface.

No. You’re already taking the hit when performing the software->screen
blit. Rendering into the software buffer is where you have the raw,
unrestricted CPU power at your disposal - and you have 40% of the CPU
power left per frame, which should be enough to fill the screen several
times over, on a PC133 or RDRAM system.
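For example, an untested sketch of that pattern for a 16 bpp software surface (draw_column() is a made-up helper; the raycaster would call it once per screen column, then do the single SDL_UpdateRect() per frame as in your current update_screen()):

#include "SDL.h"

/* Draw one vertical wall slice straight into a 16 bpp software surface.
 * With SDL_SWSURFACE the screen surface is a shadow buffer in system
 * RAM, so writes like this are cheap; the expensive part is the one
 * SDL_UpdateRect(screen, 0, 0, 0, 0) per frame afterwards.
 * (Locking once per frame around all columns would be even better.) */
void draw_column(SDL_Surface *screen, int x, int top, int bottom, Uint16 color)
{
    int y;
    Uint8 *base;

    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);

    base = (Uint8 *)screen->pixels + x * 2;   /* 16 bpp: 2 bytes per pixel */
    for (y = top; y <= bottom; ++y)
        *(Uint16 *)(base + y * screen->pitch) = color;

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);
}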

I’m sure I’m missing a point: with a 600 MHz CPU I’ll be slower than a 486
doing the same thing (Wolfenstein 3D), with a double-sized
resolution (640x480).

Well, sure, we have a serious performance issue in the system->VRAM
blitting, but don’t be unfair here:

* You're pumping four times as many pixels, each taking two
  bytes instead of one. That's 8 times more data.

* Depending on what you do, you have 15-40 times the CPU power,
  so even without this blitting issue, you're not getting away
  with sloppy code, especially not if you're going to do more
  advanced rendering than the Wolfenstein 3D engine.

Now, 60% of the CPU power is still a big loss, but put in relation to how
much more work you’re actually going to do, it might not seem that bad.

What is wrong with the design ?

Nothing, really. It doesn’t get more fun than that, “thanks” to the
crappy design of current video cards. :-/

However, you may try to avoid using software surfaces entirely,
allowing SDL to use h/w accelerated VRAM->VRAM blitting, which is
available on several targets. And then there’s OpenGL…

And don’t even think about rendering directly to VRAM for full screen
animation! That will guarantee poor performance on virtually all
targets. heh

I feel like someone told me that Santa Claus doesn’t exist :-/

Yeah, I know that feeling… :-/

Thanks for your answer, David; it’ll take more time than I thought to
have something that plays smoothly, but I don’t give up.

It’s definitely possible. For example, ZDoom is very playable on Linux in
640x480 resolutions.

But still, the Windows version is running circles around the Linux version
at the same resolution… It will be even “worse” with the new version,
as I made Randy aware of how slow CPU writing to VRAM really is. After
changing to DMA blits with DirectX, he managed to get some 35 fps at
1400x1050! (Note that ZDoom still works with 8 bpp, and tries to get that
screen pixel format whenever possible.)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
-------------------------------------> http://olofson.net -'

On Wednesday 31 October 2001 18:54, mewn wrote:

I feel like someone told me that Santa Claus doesn’t exist :-/

Yeah, I know that feeling… :-/

“There is no magic. There are tricks.” - Nakor, the Blue Rider.
(ask any c64 coder =)

Thanks for your answer, David; it’ll take more time than I thought
to have something that plays smoothly, but I don’t give up.

It’s definitely possible. For example, ZDoom is very playable on
Linux in 640x480 resolutions.

But still, the Windows version is running circles around the Linux
version at the same resolution… It will be even “worse” with the
new version, as I made Randy aware of how slow CPU writing to VRAM
really is. After changing to DMA blits with DirectX, he managed to
get some 35 fps at 1400x1050! (Note that ZDoom still works with 8
bpp, and tries to get that screen pixel format whenever possible.)

Sounds like someone should fix/redesign X … No hardware
pageflipping, no DMA … Linux is better at multitasking than Windows,
so if Win can do it, Lin should be able to, too!

Unless DMA blitting code is patented?

–
Trick


Linux User #229006 * http://counter.li.org

You should never bet against anything in science at odds of more than
about 10^12 to 1.
– Ernest Rutherford

Sounds like someone should fix/redesign X … No hardware
pageflipping, no DMA … Linux is better at multitasking than Windows,
so if Win can do it, Lin should be able to, too!

Unless DMA blitting code is patented?

Yup, X could use a redesign. There is at least one project with that
goal, but X is huge. Duplicating all its functionality (How do you do
DMA blits over a socket? :P) is a massive task, and not likely to be
complete any time soon.

On Thu, 1 Nov 2001, Trick wrote:

I feel like someone told me that Santa Claus doesn’t exist :-/

Yeah, I know that feeling… :-/

“There is no magic. There are tricks.” - Nakor, the Blue Rider.
(ask any c64 coder =)

Yeah… But in this case - magic or tricks, or just plain code - it has
to be done in the drivers. (And SDL is not a driver, BTW…)

[…]

Sounds like someone should fix/redesign X … No hardware
pageflipping, no DMA … Linux is better at multitasking than Windows,
so if Win can do it, Lin should be able to, too!

From what I’ve heard (I’m not an XFree86 developer), there are design
problems with X that call for ugly hacks to support those features. DRI
is supposed to address these design issues, and indeed, most of the code
already seems to be in place. Although so far, all energy seems to have
been focused on getting OpenGL fast enough for serious gaming, and on
video overlays and other “narrow” features, while “normal” 2D features
have been left behind.

Unless DMA blitting code is patented?

Well, patents (especially software patents) are a very, very evil and
stupid thing. Indeed, many rather obvious (hardware and software) designs
are in fact patented, which results in trouble for Free/Open Source
software developers. However, I don’t think there’s a problem in this
case. :-)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
----------------------------> http://www.linuxdj.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
-------------------------------------> http://olofson.net -'

On Thursday 01 November 2001 04:14, Trick wrote: