OpenGL or pure SDL gfx?

I may be wrong, it’s been a while since I did this, but you should use the SDL_SetColorKey BEFORE converting the surface to the display format, so your code should look like this:

levelbg_bmp = SDL_LoadBMP("image.bmp");
SDL_SetColorKey(levelbg_bmp,
                SDL_SRCCOLORKEY | SDL_RLEACCEL,
                SDL_MapRGB(levelbg_bmp->format, 255, 0, 255));
levelbg = SDL_DisplayFormat(levelbg_bmp);
SDL_FreeSurface(levelbg_bmp);

Also note that if your image REALLY is a background, i.e., there is nothing behind it, you SHOULD NOT use a color key on it; even on hardware accelerated surfaces it will take a lot of processing power and make your scrolling look bad. It’s good practice, if you have a multi-layered scroll, to have two surfaces, one for colorkeyed blocks and another for opaque blocks, so you waste no processing power.

Another point (at least this is how it works on Windows): you may not need to use SDL_Delay to “wait”. When using SDL_Flip on a fullscreen, hardware, double buffered surface, it will wait for the vertical retrace and hence give plenty of time for other apps to do their job. Of course, windowed games don’t have access to that (HINT: if your game is running in a window, it WILL look worse than in hardware full screen on Windows). It’s good practice to check which situation you are in:

if (!fullscreen_hw_surface)
    SDL_Delay(10);

This can be improved by measuring the time spent drawing the screen and forcing a fixed frame rate. For 50 fps (1000/50 = 20 ms per frame), it would look like this:

while (playing)
{
    Uint32 start = SDL_GetTicks();

    // Draw everything to the screen and flip it

    Uint32 end = SDL_GetTicks();
    if (end - start < 20)
        SDL_Delay(20 - (end - start));
}

Another good practice, if you want your animation to look the same even when you can’t achieve a constant frame rate, is to keep track of time when animating elements, not of frame numbers. It’s a bit more complicated when you have to pause, and you have to consider a maximum time slip for your animation before you start slowing it down to keep your game playable.

Well, if something is confusing, let me know (also, if I confused something, let me also know :P).

Paulo

Nowadays you are never the only process running,

Anyway, has anyone tried to play with SCHED_RR or SCHED_FIFO scheduling
priorities on Linux? (Windows might also offer ways to bias scheduler?)
I wonder whether this could help you get the CPU on time …

latimerius

On Sun, Sep 14, 2003 at 05:34:06PM -0500, Bob Pendleton wrote:

I may be wrong, it’s been a while since I did this, but you should use the
SDL_SetColorKey BEFORE converting the surface to the display format, so your

Tried it, but it doesn’t make much difference, if any. At least nothing I can see. The scrolling is about the same. Thanks for the tip about colorkey anyway, I’ll try and remember it.

As for delaying or not: if I choose to delay between screen updates, scrolling gets worse. If I try to fix the frame rate, scrolling gets worse. I don’t know if this is because my video driver is “good”, but it seems I shouldn’t do it, at least.
I’m not able to get a hw-surface (thanks for pointing out the error in the code for detecting that, Steve) even though I try. Not a doublebuffer either. I use nvidia’s own driver for Linux, but I guess it doesn’t help much (it seems to work with other people’s GL apps though, like Chromium).

Regards
Henning

Nowadays you are never the only process running,

Anyway, has anyone tried to play with SCHED_RR or SCHED_FIFO scheduling
priorities on Linux? (Windows might also offer ways to bias scheduler?)
I wonder whether this could help you get the CPU on time …

The next version of the kernel is supposed to have much finer scheduling
granularity and a much improved scheduler. It may be best to just wait for
it.

	Bob Pendleton

On Tue, 2003-09-16 at 11:14, Latimerius wrote:

On Sun, Sep 14, 2003 at 05:34:06PM -0500, Bob Pendleton wrote:

latimerius


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl


I had a similar problem getting a hardware surface on an nvidia card,
until I set SDL_VIDEODRIVER=dga in the environment and ran the program as
root. In my case, double-buffering only made things much slower though
because most of my drawing was already done in software-favoring ways.

Ryan

On Tue, 16 Sep 2003, Henning wrote:

I may be wrong, it’s been a while since I did this, but you should use the
SDL_SetColorKey BEFORE converting the surface to the display format, so your

Tried it, but it doesn’t make much difference, if any. At least nothing I can see. The scrolling is about the same. Thanks for the tip about colorkey anyway, I’ll try and remember it.

As for delaying or not: if I choose to delay between screen updates, scrolling gets worse. If I try to fix the frame rate, scrolling gets worse. I don’t know if this is because my video driver is “good”, but it seems I shouldn’t do it, at least.
I’m not able to get a hw-surface (thanks for pointing out the error in the code for detecting that, Steve) even though I try. Not a doublebuffer either. I use nvidia’s own driver for Linux, but I guess it doesn’t help much (it seems to work with other people’s GL apps though, like Chromium).

Regards
Henning



We do an SDL_Delay(1) in StepMania, in Windows only. 10 is extreme; it
means it’s impossible to even lock at 100 FPS, and if you’re writing a 2D
game, you should usually be able to lock at the refresh rate, even on an
800x600 @ 120Hz screen.

On Tue, Sep 16, 2003 at 01:10:41PM +0200, Olof Bjarnason wrote:

One tip: add a line

SDL_Delay(10);


Glenn Maynard

Some enemies are spawned during the game, while others are spawned
whenever the player gets a new ship. (It’s pretty much only the state
of the bases that survives a player death. The rest is
reinitialized.) These bombs belong to the latter category - and yes,
that means you’ll have to start all over with the “bomb cleaning” if
you lose a ship. hehe

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net - http://www.reologica.se

On Tuesday 16 September 2003 16.36, Milan Babuskov wrote:

David Olofson wrote:

Well, here’s one addict ;-). I’m still struggling to get past
that darn 50th level…

You might try being very defensive at first, getting rid of most
of the blue bombs, before you start working on the base.

I thought their supply was infinite, so I could keep getting rid of
them until the level is over?!

[…]

The problem is that you don’t know what the user’s drivers are doing,
and the user most likely does not know what they are doing either.
Then there is the problem of drivers that sync by busy-waiting, so
you don’t get any value out of the syncing anyway…

So, run a tiny test loop during initialization and use the delay()
call if the frame rate is greater than 100 FPS or so. Not what you
want to do, but the best you can do in an uncertain world.

Yeah - but you can always leave manual selection of “sync mode” as an
option for advanced users. (Make the default a safe option,
autodetection or whatever is most likely to work “ok” everywhere.)

It’s pretty annoying with applications that run with suboptimal
performance just because of dirty hacks you can’t disable…

//David Olofson

On Tuesday 16 September 2003 16.39, Bob Pendleton wrote:

It sure does, especially if you have a preemptive and/or lowlatency
patched kernel. (These are mostly used by music/audio folks.)

However, these require root (or corresponding capabilities for RT
scheduling), and they make it possible to freeze the system
completely, just by not giving away the CPU. So, unless your code is
100% bug free, throw in a watchdog thread or something. :-)

Anyway, if you want to use this for video stuff, there is a problem:
the whole “chain” must be real time safe, or you’ll end up with no
major improvement at best, or a frozen system at worst. Unless you’re
doing s/w rendering directly into VRAM (DGA, svgalib or fbdev),
you’ll probably have to run more things than just your application as
SCHED_FIFO or SCHED_RR; most likely the X server.

I’ve managed to get XFree86 to run as SCHED_FIFO, and others have too
AFAIK. However, it seems like certain applications occasionally make
X deadlock or something if you do this. At least, older versions of
KDE didn’t mix well with XFree86 running as SCHED_FIFO.

//David Olofson

On Tuesday 16 September 2003 18.14, Latimerius wrote:

On Sun, Sep 14, 2003 at 05:34:06PM -0500, Bob Pendleton wrote:

Nowadays you are never the only process running,

Anyway, has anyone tried to play with SCHED_RR or SCHED_FIFO
scheduling priorities on Linux? (Windows might also offer ways to
bias scheduler?) I wonder whether this could help you get the CPU
on time …

I can attest to that. I had numerous problems with 2.4 and an SDL-based
emulator I’m working on. Basically, the only way to get a constant
framerate (60 fps) was with a busy-wait, taking 100% of the CPU. I added
some code to get an average of 60 fps, and while it reduced CPU usage
to less than 10%, the updates weren’t smooth anymore.

Now with kernel 2.6.0-test5, it’s actually much better to use the average
method. The busy-wait method causes the application to lose CPU time
(it gets scheduled as a CPU hog), while the average method is virtually
indistinguishable from the 2.4 busy-wait. And of course the CPU usage
never goes above 5%. YMMV.

Now if I could only figure out why SDL_AUDIODRIVER=alsa causes CPU spikes
that make it practically useless, while OSS (under ALSA) emulation works
fine …

Steve

On Tuesday 16 September 2003 02:46 pm, Bob Pendleton wrote:

On Tue, 2003-09-16 at 11:14, Latimerius wrote:

On Sun, Sep 14, 2003 at 05:34:06PM -0500, Bob Pendleton wrote:

Nowadays you are never the only process running,

Anyway, has anyone tried to play with SCHED_RR or SCHED_FIFO
scheduling priorities on Linux? (Windows might also offer ways to
bias scheduler?) I wonder whether this could help you get the CPU on
time …

The next version of the kernel is supposed to have much finer
scheduling granularity and a much improved scheduler. May be best to
just wait for it.

  Bob Pendleton

I’ve been running Linux 2.6.0-test3 for a while now (mainly because of the
scheduling changes), and although the granularity went down by a factor of
ten (from 10 ms to 1 ms), there seem to be situations in which the scheduler
does not behave quite as nicely as I would like. Sometimes it seemed to
“neglect” processes repeatedly for noticeable amounts of time, which resulted
in things like rhythmically jerky graphics rendering. I haven’t investigated
this further, and it might be fixed now.

I have not yet happened to stumble across a situation where the higher
scheduler frequency would make a noticeable difference, though.

Regards,
Gregor

On Tuesday, 16 September 2003 19:16, Bob Pendleton wrote:

The next version of the kernel is supposed to have much finer scheduling
granularity and a much improved scheduler. May be best to just wait for
it.

  Bob Pendleton

I’ve played around with the older preemptive and lowlatency patches,
and they do indeed improve the situation a bit, even for normal
applications. The whole system just gets generally more responsive.

Still, if you want “firm” or hard real time applications, there is no
avoiding SCHED_FIFO, regardless of improvements like these. Real time
scheduling means that applications play by quite different rules, and
those rules just don’t work for normal applications in a “general
purpose” OS. Most importantly, applications that use lots of CPU
power always get lower priority, which is not acceptable if they’re
real time threads.

It’s definitely possible to make games and whatever drivers or
services they might need hard real time, but I doubt it will happen
on your average desktop system any time soon. XFree86 and the like
are far from being well behaved real time applications, and libraries
seem to rely on dynamic memory management and other services that are
inherently RT unsafe in a general purpose OS.

It’ll probably work “well enough” in a not too distant future, but
smooth scrolling without dropped frames while you’re compiling some
code, burning a CD and compacting your mail folders - that won’t
happen any time soon, I’m afraid. We’ve done that sort of stuff with
low latency audio (below 5 ms) for quite some time, but it’s more
complicated with video.

//David Olofson

On Tuesday 16 September 2003 19.16, Bob Pendleton wrote:

On Tue, 2003-09-16 at 11:14, Latimerius wrote:

On Sun, Sep 14, 2003 at 05:34:06PM -0500, Bob Pendleton wrote:

Nowadays you are never the only process running,

Anyway, has anyone tried to play with SCHED_RR or SCHED_FIFO
scheduling priorities on Linux? (Windows might also offer ways
to bias scheduler?) I wonder whether this could help you get the
CPU on time …

The next version of the kernel is supposed to have much finer
scheduling granularity and a much improved scheduler. May be best
to just wait for it.

I’m not able to get a hw-surface (thanks for pointing out the error in
the code for detecting that steve) even though I try. Not a
doublebuffer either.

I can get a hardware, double-buffered surface on my Radeon 7500 (linux
2.4.19, XFree 4.2.0, SDL 1.2.5, SDL_VIDEODRIVER=dga - running as root)
but even then, the test program is not scrolling smoothly.

latimerius

On Tue, Sep 16, 2003 at 06:59:50PM +0200, Henning wrote:

Henning,

your program runs/scrolls smoothly on my P4 2 GHz, 256 MB RAM, W2K system, and
colorkeying works. Even if I use a sw-surface (no fullscreen and doublebuf) it
runs smoothly.

So maybe it’s a problem with your PC/OS/gfx-card combination, and you should
check that first (unless you have a very slow CPU/gfx-card); maybe do a search
in the FAQ/archives. I am not using Linux myself, but I remember frequent
posts about performance problems under Linux.

Nevertheless, moving a 1024x768 image at 32-bit color depth with a colorkey
is quite a big job for a CPU/gfx-card.
Maybe reduce to 16-bit color depth and draw only the part of the image
that is needed.

SDL_Delay: I never used it, but I am only developing for Win32.

For the main game loop I am using “frame rate independent movement”; it’s not
perfect, but it was sufficient for me (look in the archives or Google).

P.S. It’s better to free ALL surfaces you create (level_bg) and to check ALL
return values of called functions.

Thomas

David Olofson wrote:

Some enemies are spawned during the game, while others are spawned
whenever the player gets a new ship. (It’s pretty much only the state
of the bases that survives a player death. The rest is
reinitialized.) These bombs belong to the latter category - and yes,
that means you’ll have to start all over with the “bomb cleaning” if
you lose a ship. hehe

Thanx. Since this is really OT, I will mail you directly if I have some
other questions…

--
Milan Babuskov
http://fbexport.sourceforge.net

Nevertheless, moving a 1024x768 image at 32-bit color depth with a colorkey
is quite a big job for a CPU/gfx-card.
Maybe reduce to 16-bit color depth and draw only the part of the image
that is needed.

Yes, reducing colour depth is a good idea, but there’s not much difference in performance whether I use 16 or 32 bpp. Now, 24 bpp, on the other hand, is slower.
I was under the impression, however, that I only draw what’s needed. If not, then what am I doing wrong? I use a 640x400 area for the scrolling image (the last 80 pixels will be used for different statistics), so I have to draw that whole area.

SDL_Delay: I never used it, but I am only developing for Win32.

P.S. better to free ALL surfaces you create (level_bg) and check ALL return vals
of called functions.

Yes, but this was only a test with as much as possible removed for ease of reading.

Regards
Henning

Yes, reducing colour depth is a good idea, but there’s not much difference in performance whether I use 16 or 32 bpp. Now, 24 bpp, on the other hand, is slower.

Most graphics cards use 32-bit color modes (24 color bits plus an alpha channel), not 24-bit modes. It’s also faster to index and address memory in multiples of 4 bytes (32 bits) or 2 bytes (16 bits) than in multiples of 3 bytes (24 bits). If you’re using software surfaces, they are converted to 32 bits during the blit, which makes them slower. 32-bit and 16-bit blits to a screen of the same color depth are faster because the data is copied directly from system memory to the video card; no conversion is needed.

Paulo

Henning wrote:

I want it to be just a 2D game, but with a scrolling background. I’ve
read a lot about SDL and GL and Linux game programming in general
now, and I’m still not able to get really smooth scrolling without
resorting to OpenGL (I guess it has to do with GL supporting my
hardware while SDL does not?). But I don’t really want any 3D effects
or anything, so I think it’s perhaps a little “overkill” to use GL
just to get smoother scrolling (also I’m puzzled by the fact that my
1.5 GHz PC can’t produce smooth scrolling, when it was not much of a
problem on one of those good ol’ 7.14 MHz (IIRC) computers).

There are significant compatibility issues with OpenGL, but it also gets you
better results. I would (and do) support both. Ironically, my own
development computer is one of those with OpenGL issues, so I have to borrow
a friend’s computer to test the OpenGL rendering while using the SDL 2D
version locally. The difference in rendering quality is significant, even
for a purely 2D game.

--
Rainer Deyke - rainerd at eldwood.com - http://eldwood.com

pvwr at sympatico.ca wrote:

Also note that if your image REALLY is a background, i.e., there is
nothing behind it, you SHOULD NOT use a color key on it; even on
hardware accelerated surfaces it will take a lot of processing power
and make your scrolling look bad. It’s good practice, if you have a
multi-layered scroll, to have two surfaces, one for colorkeyed blocks
and another for opaque blocks, so you waste no processing power.

Actually, if you’re using tiles, you should use one surface per tile, at
least for colorkeyed tiles without hardware acceleration. The speed of RLE
accelerated blits depends on the size of the source surface, not the size of
the area actually blitted.

--
Rainer Deyke - rainerd at eldwood.com - http://eldwood.com