SDL_BlitSurface accelerated for sure?

Hi.

I am using SDL_BlitSurface to draw background tiles in a platform
game. The tiles are stored in video memory, using
SDL_CreateRGBSurface.

When I query SDL_GetVideoInfo, it tells me that, among other things,
hardware->hardware source colorkey blitting is accelerated.

But I still notice quite a slowdown (the refresh rate drops from 60
to 30 Hz) when drawing 20*15 = 300 tiles, 32x32 pixels each, on a 640x480
fullscreen display in Windows XP, running on an AMD 700 machine with a
32 MB GeForce2 card.

I find this quite surprising: is it possible that my hardware blitting
is this slow? Or is it actually software? How can I find out, when
SDL_GetVideoInfo tells me it is accelerated?

I guess my question is: can I trust SDL_GetVideoInfo?

One idea is to use OpenGL in orthogonal (orthographic?) mode
instead of SDL_BlitSurface, to ensure my graphics hardware is
actually used by my program. Is this a common solution among
SDL folks?

Thanks,

/Olof

ps. Relevant snippets of code follow:

// *** Set video mode ***
screen = SDL_SetVideoMode(640, 480, 32,
    SDL_FULLSCREEN | SDL_DOUBLEBUF | SDL_HWSURFACE);
if (!screen) exit(-1);

// *** Creation of the tilemap ***
// (I have the tilemap stored in a sw surface
// and blit it to the created hw surface)
// The source software tilemap is 512x512 pixels big.
SDL_Surface *hw_surface =
SDL_CreateRGBSurface(SDL_HWSURFACE|SDL_SRCCOLORKEY,
sw_surface->w, sw_surface->h, 32,
0x00ff0000, 0x0000ff00, 0x000000ff, 0);
SDL_BlitSurface(sw_surface, NULL, hw_surface, NULL);
SDL_SetColorKey(hw_surface, SDL_SRCCOLORKEY, 0x000000);

// *** SDL_GetVideoInfo reading ***
// All of these say "Yes" on my computer.
// When I say "sprite blit" I mean colorkey blit.
const SDL_VideoInfo *vid_info = SDL_GetVideoInfo();
cout << "Software->hardware blit: " <<
    (vid_info->blit_sw ? "Yes" : "No") << endl;
cout << "Software->hardware sprite blit: " <<
    (vid_info->blit_sw_CC ? "Yes" : "No") << endl;
cout << "Hardware->hardware sprite blit: " <<
    (vid_info->blit_hw_CC ? "Yes" : "No") << endl;
cout << "Solid rectangle draw: " <<
    (vid_info->blit_fill ? "Yes" : "No") << endl;

// *** Blitting of a tile ***
// This should be hardware accelerated… Or?
SDL_BlitSurface(hw_surface, &srcRect, screen, &dstRect);
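
// *** The per-frame tile loop ***
// (Simplified sketch of the loop that produces the 20*15 blits each frame;
// tileAt() is just a placeholder name for my level-data lookup.)
for (int ty = 0; ty < 15; ty++) {
    for (int tx = 0; tx < 20; tx++) {
        int tile = tileAt(tx, ty);   // index into the 16x16-tile tilemap
        SDL_Rect srcRect = { (Sint16)((tile % 16) * 32),
                             (Sint16)((tile / 16) * 32), 32, 32 };
        SDL_Rect dstRect = { (Sint16)(tx * 32), (Sint16)(ty * 32), 32, 32 };
        SDL_BlitSurface(hw_surface, &srcRect, screen, &dstRect);
    }
}
SDL_Flip(screen);   // SDL_DOUBLEBUF: swap buffers once per frame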

> I find this quite surprising: is it possible that my hardware blitting
> is this slow? Or is it actually software? How can I find out, when
> SDL_GetVideoInfo tells me it is accelerated?

My guess is that the blit is not hardware accelerated, and the slowdown
is eating an entire vertical refresh sync period, due to SDL_DOUBLEBUF
sync’ing to vertical refresh.

Check the flags of the surfaces returned by SDL_SetVideoMode and
SDL_CreateRGBSurface to make sure that the surfaces are in hardware.
Also check the flags after the blit to make sure the surface didn’t
get kicked out to system memory because of hardware blit limitations.
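
In code, that check might look something like the following sketch (SDL 1.2;
the report() helper is just an illustrative name, not part of the SDL API):

#include <SDL.h>
#include <iostream>

void report(const char *name, SDL_Surface *s)
{
    // SDL_SWSURFACE is defined as 0, so test the SDL_HWSURFACE bit directly.
    std::cout << name
              << ((s->flags & SDL_HWSURFACE) ? " [hw]" : " [sw]")
              << std::endl;
}

// report("screen", screen);        // right after SDL_SetVideoMode
// report("tilemap", hw_surface);   // right after SDL_CreateRGBSurface
// SDL_BlitSurface(hw_surface, &srcRect, screen, &dstRect);
// report("tilemap", hw_surface);   // after the blit: was it kicked back to system memory?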

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

> One idea is to use OpenGL in orthogonal (orthographic?) mode
> instead of SDL_BlitSurface, to ensure my graphics hardware is
> actually used by my program. Is this a common solution among
> SDL folks?

Yes, this is commonly done by people assuming that their audience
is running current PC hardware.
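
For reference, the orthographic setup being discussed might look roughly like
this (SDL 1.2 with the SDL_OPENGL flag; the 640x480 values are simply carried
over from the snippets above):

#include <SDL.h>
#include <SDL_opengl.h>

// Ask SDL for an OpenGL context instead of a 2D framebuffer surface.
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_FULLSCREEN | SDL_OPENGL);

// Map OpenGL coordinates 1:1 to pixels, with y growing downwards like in SDL.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);

// Tiles are then uploaded once as textures (glTexImage2D) and drawn as
// textured quads; SDL_GL_SwapBuffers() takes the place of SDL_Flip().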

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Sam Lantinga: “Re: [SDL] SDL_BlitSurface accelerated for sure?”…

Thanks for your answers. I have checked the surfaces
returned from SDL_SetVideoMode and SDL_CreateRGBSurface
to see whether they reside in hardware or software memory.

Using the following code, I got the (surprising) result
that the tilemap is neither an SDL_HWSURFACE nor an
SDL_SWSURFACE:

int hw = (s->flags) & SDL_HWSURFACE;
if (hw)
    cout << "[hw]";

int sw = (s->flags) & SDL_SWSURFACE;
if (sw)
    cout << "[sw]";

(The result was "" for the tilemap surface and
"[hw]" for the screen surface.)

So I checked the SDL_video.h file and found the definitions
of SDL_HWSURFACE and SDL_SWSURFACE:

#define SDL_SWSURFACE 0x00000000
#define SDL_HWSURFACE 0x00000001

… which is not quite what I expected. This means
that you can't do an "or-test" like I did above to
figure out that a surface lies in system memory. What
has to be done is to check whether it lies in video
memory via flags & SDL_HWSURFACE, and if it doesn't,
conclude that it lies in system memory:

int hw = (s->flags) & SDL_HWSURFACE;
if (hw)
    cout << "[hw]";
else
    cout << "[sw]";

This explains why my tilemap was "neither" in
system nor video memory: SDL just didn't work the
way I expected. Or am I mistaken?

With this new test, I find that the tilemap is
actually not in video memory. This basically
explains why I got such a harsh slowdown when
blitting on the whole 640x480 screen.

But: SDL_GetVideoInfo tells me hardware surfaces
can be created! At least, the following output tells
me that:

Surface ‘tilemap’: 512x512 [sw]
Surface ‘screen’: 640x480 [hw][d.buff]
*** Hardware acceleration information ***
Available video memory: 31376
Hardware surfaces createable: Yes
Hardware->hardware blit: Yes
Hardware->hardware colorkey blit: Yes
Hardware->hardware alpha blit: No
Software->hardware blit: Yes
Software->hardware colorkey blit: Yes
Software->hardware alpha blit: No
Solid rectangle draw: Yes

How is it possible that I don't get
a video surface for my tilemap, even though a
640x480 32-bit double-buffered screen would
consume ~2.4 MB of video memory and my tilemap
~1 MB? My 32 MB GeForce2 card should have
capacity for this, shouldn't it? I do get
a hardware surface for the 640x480 screen
though, as can be seen in row two above.

I have not yet checked whether SDL_BlitSurface
fails due to hardware limitations (switching
between windows and my application's screen?)
since my tilemap didn’t reside in video
memory in the first place.

Thanks for any help. Right now I am feeling
forced to resort to OpenGL instead of SDL’s
standard blit functionality…

/Olof

> What has to be done is to check whether it lies in video
> memory via flags & SDL_HWSURFACE, and if it doesn't,
> conclude that it lies in system memory:

Yeah, that was the way it was designed, so a simple if-else test could be used.

> How is it possible that I don't get
> a video surface for my tilemap, even though a
> 640x480 32-bit double-buffered screen would
> consume ~2.4 MB of video memory and my tilemap
> ~1 MB?

Many video cards can’t create single hardware surfaces larger than the
current screen resolution. That might explain what’s going on for you.
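
One possible workaround, if that limit is what's being hit, is to split the
512x512 tilemap into several smaller hardware surfaces, e.g. 512x128 strips
that each fit within the 640x480 screen. A rough sketch (the strip height and
the strips vector are illustrative choices, not from the original code):

#include <SDL.h>
#include <vector>

const int STRIP_H = 128;                  // 512x128 fits within a 640x480 screen
std::vector<SDL_Surface*> strips;

for (int y = 0; y < sw_surface->h; y += STRIP_H) {
    SDL_Surface *strip = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_SRCCOLORKEY,
                                              sw_surface->w, STRIP_H, 32,
                                              0x00ff0000, 0x0000ff00, 0x000000ff, 0);
    SDL_Rect src = { 0, (Sint16)y, (Uint16)sw_surface->w, (Uint16)STRIP_H };
    SDL_BlitSurface(sw_surface, &src, strip, NULL);
    SDL_SetColorKey(strip, SDL_SRCCOLORKEY, 0x000000);
    strips.push_back(strip);
}

// When blitting a tile, select strips[(tile_row * 32) / STRIP_H] and subtract
// that strip's y offset from srcRect.y before calling SDL_BlitSurface.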

OpenGL is also a good option, if you’re targeting PC hardware.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment