Blit performance

Using SDL 1.2.2 I have a 1024x768 32bpp
X11 window opened by SDL_SetVideoMode, and
I blit a background 640x480 bmp onto it. To
cover the entire screen takes 6 to 9 blits,
depending on where I “scroll” the background
onto the screen.

When scrolling the background, the
6 to 9 blits take about 50 ms, and the
update rects also take about another 50
ms. That’s 100 ms total; this gives me a
frame rate of less than 10 frame/sec,
usually something like 7.3 frame/sec.

I have no previous experience with this
sort of thing, but the frame rate that
I’m getting is depressingly slow. Is it
normal? Am I expecting too much? I do
get many thousands of frame/sec when
doing nothing (no blit and no update
rects).

Using the same X configuration I can play
Myth2 v1.3 in a 1024x768 window. I know
it was built with SDL, but I don’t know
how much it uses SDL. When I swing the
entire scene around in Myth2 it seems much
smoother than my scrolling background
program (although I don’t know what the
frame rate is when I swing the
camera all around).

This is on an SMP (SMP probably doesn’t
matter) 333 MHz PII running Linux 2.2.14
with XFree86 3.3.6 and an 8 MB Matrox G200.

–Douglas Jerome

I am not an SDL guru, but I believe games using SDL
also use OpenGL. Besides modern 3D acceleration, video
cards also have a horde of 2D accel features perfect
for playing MPEG, scrolling, etc. Sorry, just throwing
yet another two cents…


Games using SDL don’t have to use OpenGL; there were several games,
commercially and otherwise, using SDL before there was any OpenGL support
in the library.

I have no previous experience with this
sort of thing, but the frame rate that
I’m getting is depressingly slow. Is it
normal? Am I expecting too much? I do
get many thousands of frame/sec when
doing nothing (no blit and no update
rects).

It’s probably normal if you’re trying to blit so many 32-bit pixels with
each frame. Doubly so if your X server is in 16-bit mode, since SDL has to
convert each and every pixel to 16 bits each and every time you blit.

Using the same X configuration I can play
Myth2 v1.3 in a 1024x768 window. I know
it was built with SDL, but I don’t know

Myth2 is not in 32-bit color.

This is on an SMP (SMP probably doesn’t
matter) 333 MHz PII running Linux 2.2.14
with XFree86 3.3.6 and an 8 MB Matrox G200.

Other notes: XFree86 4 is generally faster than XFree86 3. Kernel 2.4 has, at
least for me, given a significant framerate increase to certain SDL apps.
YMMV, and you should attack the real problem, which is the 32-bit thing.
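
For what it’s worth, one way to sidestep the conversion is to let SDL
pick the display’s native depth. A minimal sketch (SDL 1.2; passing 0
for bpp means “use the current display depth”):

/* Ask for the display's current depth instead of forcing 32bpp;
   SDL_ANYFORMAT also avoids an intermediate shadow surface. */
screen = SDL_SetVideoMode(1024, 768, 0, SDL_SWSURFACE | SDL_ANYFORMAT);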

–ryan.

Hi,

On Sun, 09 Sep 2001 19:29:18 -0700 Doug Jerome wrote:

Using SDL 1.2.2 I have a 1024x768 32bpp
X11 window opened by SDL_SetVideoMode, and
I blit a background 640x480 bmp onto it. To
cover the entire screen takes 6 to 9 blits,
depending on where I “scroll” the background
onto the screen.

Did you do a

my_bmp = SDL_DisplayFormat( my_bmp );

directly after loading the bmp?
It really speeds things up.

Have also a look at SDL_ConvertSurface and SDL_DisplayFormatAlpha
in the manual.
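
The usual leak-free idiom looks something like this (a sketch with
error checking omitted; the helper name is made up):

#include "SDL.h"

/* Load a BMP and convert it once to the display's pixel format,
   freeing the temporary unconverted surface to avoid a leak. */
SDL_Surface *load_background(const char *path)
{
    SDL_Surface *tmp = SDL_LoadBMP(path);      /* surface in the BMP's format */
    SDL_Surface *bg  = SDL_DisplayFormat(tmp); /* converted copy */
    SDL_FreeSurface(tmp);
    return bg;
}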

HTH,
Willi

Kernel 2.4 has seen some amazing speed increases… I don’t know why or how,
but I have noticed significant results as well.

The XFree86 4.x item is important for Matrox (I believe), which is what he
has… But there are some cards with which you will get /much/ better performance
under XFree86 3.3.6 (such as older 3Dfx cards [my Voodoo 2 :-/ ] and some
other misc. cards like some ATIs).

When thinking of optimization, it’s best to check the support at
http://www.xfree86.org



Sam “Criswell” Hart <@Sam_Hart> AIM, Yahoo!:
Homepage: < http://www.geekcomix.com/snh/ >
PGP Info: < http://www.geekcomix.com/snh/contact/ >
Tux4Kids: < http://www.geekcomix.com/tux4kids/ >

All: thanks for the replies.

This may not be right, but I think I’ve answered some
of my own questions with this program:

#include        <stdio.h>
#include        <stdlib.h>

#define SIZE    (1024*768*4)

volatile char   m1[SIZE];
volatile char   m2[SIZE];

int   main()
   {
   long unsigned int   i;

   for (i=0 ; i<SIZE ; i++)
      {
      m1[i] = m2[i];
      }

   exit (0);
   }

to sort of model the updating of every byte in
a 1024x768 32bpp window.

It takes about 100 ms, which corresponds to the
performance that I’m getting with my SDL X11 window.
(That’s 1024×768×4 ≈ 3 MB read and 3 MB written per
frame, i.e. roughly 60 MB/s through main memory.)

Doug Jerome laid down this smack:

Using SDL 1.2.2 I have a 1024x768 32bpp
.
[bunch of stuff removed]
.
with XFree86 3.3.6 and an 8 MB Matrox G200.


Douglas Jerome
http://hackerlabs.sourceforge.net




Douglas Jerome
http://www.primenet.com/~jerome

Hello

Ever heard of Duff’s Device? Maybe it could speed this thing
up a bit.
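
For reference, the classic form applied to the byte-copy loop from the
previous message would look something like this (a sketch; it assumes
count > 0):

/* Duff's Device: an 8x-unrolled copy where the switch jumps into the
   middle of the first pass to handle the leftover count % 8 bytes. */
void duff_copy(volatile char *to, volatile char *from, long count)
{
    long n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}

(Called as duff_copy(m1, m2, SIZE). With a modern compiler, though, a
plain memcpy is usually at least as fast as hand-unrolling.)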

Regards
paines


hi

i’m wondering about the rather weak 2D blitting performance of SDL. i’ve
got an 800x600 window, doing a few blits each frame and then flipping
surfaces with SDL_Flip(). i’m getting about 50 frames per second that
way with a CPU usage of 100%, which seems very slow for a 1400 MHz
GeForce3 machine. I’ve set the SDL_HWSURFACE and DOUBLEBUF flags, but
there’s no speed difference between hardware and software surfaces.
amazingly, my 4 year old 350 MHz P2 achieves almost the same framerate. i
don’t get this, how could anyone write fast 2D games with a lot of
blitting if just doing one small blit and a flip per frame limits the FPS
to 50? am I doing something wrong?

here’s a code fragment:


screen = SDL_SetVideoMode(800,600, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

while(1)
{
SDL_BlitSurface(bg, NULL, screen, NULL);
SDL_Flip(screen);
}

very very basic you’ll agree…
interestingly, it has almost no impact on performance whether i blit
small areas or large ones. (above example blits full screen extents)

is there a way to check whether the SDL-surface actually uses hardware
acceleration or not?
should all surfaces have the same pixel format to increase blitting
speed? when i load a 24bit BMP to a surface i guess it becomes a 24bit
surface, in contrast to my 32bit screenmode, but how do i convert
surfaces to other color depths?

i’m a bit lost now since i expected the performance to be a LOT better.
please help!

btw, OS is windows2000, DX 8.1

regards
eik

You will probably find that the time is due to the SDL_Flip call, which is
waiting for a vertical retrace. This means that you will probably not
notice a change in frame rate until you blit a few thousand times. Try
this code fragment and change the number 1000 until you do notice an
effect on the fps:

screen = SDL_SetVideoMode(800,600, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

while(1)
{
for (int i = 0; i < 1000; ++i)
SDL_BlitSurface(bg, NULL, screen, NULL);
SDL_Flip(screen);
}


Eike Umlauf wrote:

i’m wondering about the rather weak 2D blitting performance of SDL. i’ve
got an 800x600 window, doing a few blits each frame and then flipping
surfaces with SDL_Flip(). i’m getting about 50 frames per second that
way with a CPU usage of 100%, which seems very slow for a 1400 MHz
GeForce3 machine. I’ve set the SDL_HWSURFACE and DOUBLEBUF flags, but
there’s no speed difference between hardware and software surfaces.

That’s not surprising. Bear in mind that there have been hardly any
advances in 2D performance in the last 5 years or so. Your GeForce3 is
not likely to be much better than a venerable Matrox Millennium at basic
blitting. On the other hand, stick that surface into a texture, use
OpenGL (or DirectX…) to render it as a pair of triangles, and see the
difference…

However, you still have some optimisations to make. Read on…

is there a way to check whether the SDL-surface actually uses hardware
acceleration or not?

Yes - this is in the documentation. Search in the ‘video’ section and
read the various function docs until you see what you need. I think it’s
to do with the pixel format of the screen surface, but you can easily
find this out.
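
A minimal sketch of those checks in SDL 1.2 (the function name here is
made up):

#include <stdio.h>
#include "SDL.h"

/* Report whether surfaces and blits are actually hardware accelerated.
   Call this after SDL_SetVideoMode(). */
void report_acceleration(SDL_Surface *screen)
{
    const SDL_VideoInfo *info = SDL_GetVideoInfo();
    printf("hardware surfaces available: %d\n", (int)info->hw_available);
    printf("hardware blits accelerated:  %d\n", (int)info->blit_hw);
    printf("screen is in video memory:   %d\n",
           (screen->flags & SDL_HWSURFACE) ? 1 : 0);
}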

should all surfaces have the same pixel format to increase blitting
speed? when i load a 24bit BMP to a surface i guess it becomes a 24bit
surface, in contrast to my 32bit screenmode, but how do i convert
surfaces to other color depths?

Again, this is in the docs. If you don’t convert the surfaces, SDL has
to convert them for you each time. You need to convert them before
blitting to ensure maximum performance by eliminating conversion with
each blit.

I suggest you read through the ‘video’ section of the documentation,
seeing what each function does. It shouldn’t take more than an hour, and
should give you a much better understanding of what you need to do to
improve performance.

–
Kylotan

I suggest you read through the ‘video’ section of the documentation,
seeing what each function does. It shouldn’t take more than an hour, and
should give you a much better understanding of what you need to do to
improve performance.

…and 800x600 @ 32-bit is going to be slow on hardware accelerated
surfaces, too. If you need this, use OpenGL, but more likely, you don’t
need it.

I feel like a broken record.

–ryan.

hi

i’m wondering about the rather weak 2D blitting performance of SDL. i’ve
got an 800x600 window, doing a few blits each frame and then flipping
surfaces with SDL_Flip(). i’m getting about 50 frames per second that
way with a CPU usage of 100%, which seems very slow for a 1400 MHz
GeForce3 machine. I’ve set the SDL_HWSURFACE and DOUBLEBUF flags, but
there’s no speed difference between hardware and software surfaces.
amazingly, my 4 year old 350 MHz P2 achieves almost the same framerate. i
don’t get this, how could anyone write fast 2D games with a lot of
blitting if just doing one small blit and a flip per frame limits the FPS
to 50? am I doing something wrong?

Well, maybe you should try blitting a lot more surfaces and then compare the
performance again. I believe hardware blitting is queued, so the real speed
increase compared to software blitting would show up when you are blitting
lots of surfaces.

Also, you aren’t running in fullscreen with a monitor refresh rate of
50 Hz, are you? SDL_Flip waits for VSync if I recall correctly.

Dirk Gerrits

is there a way to check whether the SDL-surface actually uses hardware
acceleration or not?
should all surfaces have the same pixel format to increase blitting
speed? when i load a 24bit BMP to a surface i guess it becomes a 24bit
surface, in contrast to my 32bit screenmode, but how do i convert
surfaces to other color depths?

SDL_DisplayFormat() is your friend then. Using DirectX you should get
quite good performance provided your surfaces are in your card’s memory
and use the right format.

On the other hand, none of the GNU/Linux targets can achieve acceptable
performance, as I noted in a previous mail (which didn’t get any
reply, strangely), which is a real problem I’d say. What are the plans to
get rid of that? Is something planned for 1.3 yet? Are some people
interested in helping make an OpenGL 2D backend for SDL, which would
fully take advantage of hardware acceleration?

Alex.

–
http://www.gnurou.org

Are some people
interested in helping make an OpenGL 2D backend for SDL, which would
fully take advantage of hardware acceleration?

Alex.

Count me in. :) I have a lot of 2D OpenGL experience (if I may say so myself
:). I would only be interested in this if it were to be built into SDL
itself though. An auxiliary library like SDL_opengl2d would just not do for
me.

So when an OpenGL display mode gets initialized, all SDL hardware surfaces
should be created as 2D textures and blits would be done with textured
quads. Software surfaces should stay in system memory and be copied into
textures when blitting.

In a non-OpenGL display mode everything should of course function as normal,
with DirectDraw / X11 / and the rest.

But perhaps such a system is already planned for SDL 1.3?

Dirk Gerrits

----- Original Message -----
From: alexandrecourbot@linuxgames.com (Alexandre Courbot)
To:
Sent: Wednesday, 20 February, 2002 8:42
Subject: Re: [SDL] blit performance

Count me in. :) I have a lot of 2D OpenGL experience (if I may say so myself
:). I would only be interested in this if it were to be built into SDL
itself though. An auxiliary library like SDL_opengl2d would just not do for
me.

That’s what I have in mind too. Making OpenGL a backend (or target) just
like X11, svgalib or fbcon.

So when an OpenGL display mode gets initialized, all SDL hardware surfaces
should be created as 2D textures and blits would be done with textured
quads. Software surfaces should stay in system memory and be copied into
textures when blitting.

Exactly. You could also take advantage of accelerated alpha-blending, or
apply anti-aliasing to the final rendered image. Needless to say, it would
be very fast with a good video card.
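
A minimal sketch of such a “blit” in OpenGL 1.x, assuming the surface
has already been uploaded as a texture with glTexImage2D and the
projection is set to pixel coordinates (the function name is made up):

#include <GL/gl.h>

/* Draw a texture as a screen-aligned quad at (x, y) with size w x h. */
void gl_blit(GLuint tex, int x, int y, int w, int h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2i(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2i(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2i(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2i(x,     y + h);
    glEnd();
}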

In a non-OpenGL display mode everything should of course function as normal,
with DirectDraw / X11 / and the rest.

Right. Problems might occur if the programmer uses OpenGL functions
along with the OpenGL backend, maybe. But I’m not even sure myself.

But perhaps such a system is already planned for SDL 1.3?

No idea. I don’t even know whether plans for 1.3 started or not. Anyone?

David Olofson once made a quick hack to use OpenGL rendering within SDL
for his Kobo Deluxe game. It’s not exactly a backend, as it’s included
with the Kobo sources and redefines some SDL functions in practice. But it’s
worth a look anyway. You can get the Kobo source at
http://olofson.net/skobo/. Then try ./configure --help for the args to
pass to use the hack. (SDL must be built with OpenGL capability of
course). He mentions it here:
http://www.libsdl.org/pipermail/sdl/2001-December/040562.html (the rest
of the thread is quite interesting, too)

If we could start hacking this around, it would be nice. Would be better
if we could have someone who knows SDL’s internal structure a bit, as
it’s not documented as far as I know. And we’d have to mess with SDL’s
guts to make it a backend.

So, interested?

Alex.

–
http://www.gnurou.org

Hello All,

  Can someone send me a "Makefile" example?

  I don't have much time to spend on this!  A
  default Makefile would be handy for beginners. . .
  (e.g. a default makefile with many SDL_libs in
  it, so when you need to use SDL_image, you open the
  makefile and uncomment the SDL_image section ...)

  I'm using SDL, SDL_image, SDL_mixer, and
  perhaps SDL_ttf (for later uses) (I think it's ttf...).


  Thx!
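
For what it’s worth, a minimal sketch of such a Makefile, using
sdl-config (target and file names are made up):

# Uncomment the add-on libraries you need.
CC      = gcc
CFLAGS  = -Wall -O2 $(shell sdl-config --cflags)
LDLIBS  = $(shell sdl-config --libs)
LDLIBS += -lSDL_image
LDLIBS += -lSDL_mixer
# LDLIBS += -lSDL_ttf

game: main.o
	$(CC) -o $@ main.o $(LDLIBS)

clean:
	rm -f game main.o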

  I'm back ;)

-- 

Best regards,
mgirard mailto:@Mathieu_Girard
dawn check http://dawn.linux-site.net soon
closed for now . . .

Count me in. :) I have a lot of 2D OpenGL experience (if I may say so
myself :). I would only be interested in this if it were to be built into
SDL itself though. An auxiliary library like SDL_opengl2d would just not do
for me.

That’s what I have in mind too. Making OpenGL a backend (or target) just
like X11, svgalib or fbcon.

Well, SDL itself chooses between those backends. The OpenGL ‘backend’ should
be something that the programmer will choose. Why? Well, SDL can’t know
beforehand whether you are doing 2D blits along with actual pixel access or
just 2D blits. The former would be a lot faster in X11, DX, etc.; the latter
would be much better off in OpenGL. So I think the current
SDL_SetVideoMode( width, height, bpp, SDL_OPENGL | moreflags ) interface
should remain intact.
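
For reference, that existing interface amounts to something like this
(SDL 1.2 calls; the program opts into OpenGL explicitly):

SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);        /* before the mode set */
screen = SDL_SetVideoMode(800, 600, 32, SDL_OPENGL);
/* ... draw with OpenGL calls ... */
SDL_GL_SwapBuffers();                               /* instead of SDL_Flip */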

So when an OpenGL display mode gets initialized, all SDL hardware surfaces
should be created as 2D textures and blits would be done with textured
quads. Software surfaces should stay in system memory and be copied into
textures when blitting.

Exactly. You could also take advantage of accelerated alpha-blending, or
apply anti-aliasing to the final rendered image. Needless to say, it would
be very fast with a good video card.

Yes, this is what I am doing in one of my unfinished games. I get over 700
fps on my system, with lots of sprites with alpha blending and rotation and
everything. If this idea makes it into SDL 1.3, I can’t imagine how awesome
it would be. :)

In a non-OpenGL display mode everything should of course function as
normal, with DirectDraw / X11 / and the rest.

Right. Problems might occur if the programmer uses OpenGL functions
along with the OpenGL backend, maybe. But I’m not even sure myself.

If the OpenGL ‘backend’ is implemented as I said above, then it would not
interfere. The programmer just has to make sure the 3D stuff is rendered
first, followed by the blits. The other way around could have some unwanted
results. (Especially depth buffer issues, I’d reckon.) However, if it is
really a backend like X11 and such, then it might interfere drastically.
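
A sketch of the usual way to switch from the 3D scene to the 2D pass so
the blits don’t fight the depth buffer (numbers assume an 800x600 mode):

/* Switch to a pixel-coordinate projection and ignore depth for 2D. */
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, -1, 1);   /* y grows downward, like 2D blits */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();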

But perhaps such a system is already planned for SDL 1.3?

No idea. I don’t even know whether plans for 1.3 started or not. Anyone?

According to the SDL General FAQ, “SDL 1.3 will be a near full rewrite of
the SDL functionality, based on what we’ve learned over the past four
years.”

David Olofson once made a quick hack to use OpenGL rendering within SDL
for his Kobo Deluxe game. It’s not exactly a backend, as it’s included
with the Kobo sources and redefines some SDL functions in practice. But it’s
worth a look anyway. You can get the Kobo source at
http://olofson.net/skobo/. Then try ./configure --help for the args to
pass to use the hack. (SDL must be built with OpenGL capability of
course). He mentions it here:
http://www.libsdl.org/pipermail/sdl/2001-December/040562.html (the rest
of the thread is quite interesting, too)

Interesting. I’ll have to give this a look.

If we could start hacking this around, it would be nice. Would be better
if we could have someone who knows SDL’s internal structure a bit, as
it’s not documented as far as I know. And we’d have to mess with SDL’s
guts to make it a backend.

Well I’ve already been reading through the SDL source a lot, but as the
FAQ states, “SDL 1.3 will be a near full rewrite” so the architecture could
change a lot.

But Sam Lantinga, the author of SDL, hangs around this mailinglist as well,
so he’s bound to join the discussion sometime. ;)

So, interested?

Yep, said so before. :)

Dirk Gerrits

Hello gang!

One of my friends wants to try SDL in Cygwin . . .
Is it supposed to work? Or has nobody tested it yet!?

If not, I’ll post the result ;)

(For now, SDL_perl is installed and working . . .)

–
Best regards,
mgirard mailto:@Mathieu_Girard

On Wed, 20 Feb 2002, mgirard wrote:

One of my friends wants to try SDL in Cygwin . . .
Is it supposed to work? Or has nobody tested it yet!?

Yes, if you turn on the mingw option during compilation (-mno-cygwin). I
think you can also recompile the SDL library under cygwin to use both the
cygwin and sdl libraries, but I’m not sure.
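
For illustration, a compile line under that scheme might look something
like this (file names made up; assumes the SDL development package’s
sdl-config is on the path):

gcc -mno-cygwin `sdl-config --cflags` game.c `sdl-config --libs` -o game.exe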

-Mark


Mark K. Kim
http://www.cbreak.org/mark/
PGP key available upon request.

Yes, it works with the latest release of Cygwin. You can compile SDL
directly, but not the others (SDL_image, etc.). However, I’ll post the method
to use them when I find some time (stay tuned).

IoDream.

From: mgirard@microtecsecurite.com (Mathieu Girard)
To: “mgirard”
Sent: Wednesday, February 20, 2002 6:26 PM
Subject: [SDL] CygWin & SDL

Hello gang!

One of my friend want to try SDL in CygWin . . .
Does it supposed to work ? Or nobody tested it yet !?

If not, I’ll repost the result :wink:

(For now, SDL_perl is installed and working . . .)


Best regards,
mgirard mailto:mgirard at microtecsecurite.com


SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl


ifrance.com, l’email gratuit le plus complet de l’Internet !
vos emails depuis un navigateur, en POP3, sur Minitel, sur le WAP…
http://www.ifrance.com/_reloc/email.emailif