Weird SDL behaviour?

Hello,

I am just beginning SDL programming and, while trying to speed up a
multi-layer vertical scroller, I ran into the following weird behaviour
(skip to the end of the mail if you don’t want the details): my layer update
function was something like this (pseudo-code):

/* V1: layer is 640x502, screen is 640x480 */
srcrect.w = screen->w;
srcrect.h = screen->h;
srcrect.x = 0;
srcrect.y = current_y_position;
SDL_BlitSurface(layer, &srcrect, screen, NULL);

current_y_position += scroll_speed;
if (current_y_position > N) {
    /* - scroll ‘layer’ down by N pixels
     * - redraw the top N pixels of ‘layer’
     * - set current_y_position to 0
     */
}

Here ‘layer’ is an SDL_Surface whose width matches the screen’s and whose
height is N pixels greater. In my code I simply blit from the source layer to
the screen and, once every N blits, I scroll the whole layer down and redraw
N new pixels (based on a map). Note that scrolling the whole layer down
requires several SDL blits and is thus quite slow.

In order to speed up the scrolling, I wanted to precalculate the whole layer
before starting. This way I obtain a very tall layer from which I can blit
to the screen without scrolling any more. The new layer update function is
just:

/* V2: layer is 640x960, screen is 640x480 */
srcrect.w = screen->w;
srcrect.h = screen->h;
srcrect.x = 0;
srcrect.y = current_y_position;
SDL_BlitSurface(layer, &srcrect, screen, NULL);

Here comes the problem: much to my surprise, the second version is much slower
than the first. Taking some measurements, SDL_BlitSurface takes between 7 and
14 ms in the first version, whereas the same call takes between 15 and 25 ms
in the second.

That is, it seems that SDL_BlitSurface is slower if the source bitmap is much
bigger than the destination one. This seems counter-intuitive to me… is
anyone able to explain the trick to me? Or am I doing something wrong?
Clipping problems are out of the question, since I also tried calling
SDL_LowerBlit() directly.

BTW, the machine is a Pentium III 800 MHz, the gfx card is an ATI IIc (no
hardware acceleration :( ), and the screen depth is 32 bpp.

Thanks,
Gianluca

> That is, it seems that SDL_BlitSurface is slower if the source bitmap is much
> bigger than the destination one. This seems counter-intuitive to me… is
> anyone able to explain the trick to me? Or am I doing something wrong?
> Clipping problems are out of the question, since I also tried calling
> SDL_LowerBlit() directly.
>
> BTW, the machine is a Pentium III 800 MHz, the gfx card is an ATI IIc (no
> hardware acceleration :( ), and the screen depth is 32 bpp.

What operating system are you using? Can you post a link to complete source
and data that shows the problem?

Thanks!
-Sam Lantinga, Software Engineer, Blizzard Entertainment