Palettized RGB Surface in SDL 2 much slower than in SDL 1

Hi.

First, I am new to SDL and I haven't programmed anything in a long time… :wink:

So, I want to learn some old graphics programming in the classic 320x200x8 VGA mode.

I managed to get it working. But with SDL2 it’s much slower than with SDL1.
I was wondering if this is normal or something with my code is wrong?

I use an 8-bit RGB surface, a 32-bit RGB surface, and a texture in streaming mode.

This is how I create the window etc. (shortened):

Code:

SDL_Window *window = NULL;
SDL_Renderer *renderer = NULL;
SDL_Texture *texture = NULL;
SDL_Surface *helper_surface = NULL;
SDL_Surface *video_buffer = NULL;

window = SDL_CreateWindow("SDL2 Stars", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, XRES, YRES, vmodeflag);

renderer = SDL_CreateRenderer(window, -1, 0);
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
SDL_RenderSetLogicalSize(renderer, XRES, YRES);

video_buffer = SDL_CreateRGBSurface(0, XRES, YRES, 8, 0, 0, 0, 0);

helper_surface = SDL_CreateRGBSurface(0, XRES, YRES, 32, 0, 0, 0, 0);

texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STREAMING, XRES, YRES);

where XRES = 320, YRES = 200, and vmodeflag is one of SDL_WINDOW_SHOWN, SDL_WINDOW_FULLSCREEN, or SDL_WINDOW_FULLSCREEN_DESKTOP.

And this is the function that I call to make the pixels I draw to the 8-bit surface visible on screen:

Code:

void vga_update()
{
void *pixels;
int pitch;

/* Blit the 320x200x8 surface to a 32-bit helper
 * surface for pixel format conversion (8-bit palettized to
 * 32-bit). SDL2 textures don't work with 8-bit pixel formats. */
SDL_BlitSurface(video_buffer, NULL, helper_surface, NULL);

/* move it to the texture */
SDL_LockTexture(texture, NULL, &pixels, &pitch);
SDL_ConvertPixels(helper_surface->w, helper_surface->h,
helper_surface->format->format,
helper_surface->pixels, helper_surface->pitch,
SDL_PIXELFORMAT_ARGB8888,
pixels, pitch);
SDL_UnlockTexture(texture);

  /* maybe slower than SDL_ConvertPixels ?? */
  //SDL_UpdateTexture(texture, NULL, helper_surface->pixels, helper_surface->pitch);

/* and make it visible on screen */
//SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
}

In SDL1 it's just blitting the surface I draw on to the screen surface and then calling SDL_Flip.
That's much faster…

Is there anything wrong with my SDL2 code or any way to do it without texture and renderer?

Hm, if I create the renderer with the

SDL_RENDERER_SOFTWARE

flag I get the same result (speed) as with SDL 1.

As mentioned, I am new to graphics programming. Can someone explain this behavior to me?
Maybe the graphics card which the hardware renderer uses (correct me if I'm wrong) is
just slow? … it's an Intel HD 2000 (i3-2100) …

The problem is not easy to explain, but indexed colors are not a major focus of modern GPUs, especially the cheap ones. Always use 32-bit color and you won't have this problem. Indexed colors do give you an advantage in software rendering, though, because the amount of data moved from the program to the GPU is reduced.

Hm, if I create the renderer with the

SDL_RENDERER_SOFTWARE

flag I get the same result (speed) as with SDL 1.

This is because the software renderer uses SDL_Surfaces internally, so the
behavior (and performance) becomes similar to SDL 1.2 blitting.

As for the GPU slow-down, I don’t see anything wrong with your code.
Maybe someone with a keener eye can spot something.

Have you tried creating the texture without the
SDL_TEXTUREACCESS_STREAMING flag, and/or not using Lock/Unlock
functions?

What is your renderer backend?

On Sat, 24 Jan 2015 02:02:17 +0000 "sanitowi" <michael.straube1 at gmx.de> wrote:


driedfruit

I'll give a few tips, since I've worked on this myself quite a bit for speed.

  1. Locking and unlocking the texture like this doesn't do anything extra for you:

Code:

/* move it to the texture */
SDL_LockTexture(texture, NULL, &pixels, &pitch);
SDL_ConvertPixels(helper_surface->w, helper_surface->h,
helper_surface->format->format,
helper_surface->pixels, helper_surface->pitch,
SDL_PIXELFORMAT_ARGB8888,
pixels, pitch);
SDL_UnlockTexture(texture);

  2. Create the surface, then convert the pixel format once; then you don't need to convert it every time you blit and update.

Try something like this:

Code:

SDL_Window *window = NULL;
SDL_Renderer *renderer = NULL;

SDL_Texture *texture = NULL;
SDL_Surface *helper_surface = NULL;
SDL_Surface *video_buffer = NULL;

Uint32 redMask, greenMask, blueMask, alphaMask;

// Initialize color masks.
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
redMask = 0xff000000;
greenMask = 0x00ff0000;
blueMask = 0x0000ff00;
alphaMask = 0x000000ff;
#else
redMask = 0x000000ff;
greenMask = 0x0000ff00;
blueMask = 0x00ff0000;
alphaMask = 0xff000000;
#endif

SDL_Init(SDL_INIT_VIDEO);

// Set hints before you create the renderer!
SDL_SetHint(SDL_HINT_FRAMEBUFFER_ACCELERATION, "1");
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");

window =
SDL_CreateWindow(windowTitle.c_str(), SDL_WINDOWPOS_CENTERED,
SDL_WINDOWPOS_CENTERED, 320, 200,
SDL_WINDOW_SHOWN | SDL_WINDOW_FULLSCREEN_DESKTOP);

renderer =
SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED |
SDL_RENDERER_TARGETTEXTURE);

SDL_SetRenderDrawColor(renderer , 0x00, 0x00, 0x00, 0xFF);

// Now create your surface and convert the pixel format right away!
// Converting the pixel format to match the texture makes for quicker updates; otherwise
// it has to do the conversion each time you update the texture. This slows everything down.
// Do it once and you don't have to worry about it again.

helper_surface =
SDL_CreateRGBSurface(
SDL_SWSURFACE, 320, 200, 32,
redMask, greenMask, blueMask, alphaMask);

helper_surface =
SDL_ConvertSurfaceFormat(
helper_surface, SDL_PIXELFORMAT_ARGB8888, 0);

video_buffer =
SDL_CreateRGBSurface(
SDL_SWSURFACE, 320, 200, 32,
redMask, greenMask, blueMask, alphaMask);

video_buffer =
SDL_ConvertSurfaceFormat(
video_buffer, SDL_PIXELFORMAT_ARGB8888, 0);

texture = SDL_CreateTexture(renderer,
SDL_PIXELFORMAT_ARGB8888,
SDL_TEXTUREACCESS_STREAMING,
helper_surface->w, helper_surface->h);

SDL_SetTextureBlendMode(texture , SDL_BLENDMODE_BLEND);

// Sure clear the screen first… always nice.
SDL_RenderClear(renderer);
SDL_RenderPresent(renderer);

// New update
void update_vga()
{

SDL_Rect rect;

rect.x = 0;
rect.y = 0;
rect.w = 320;
rect.h = 200;

SDL_BlitSurface(video_buffer, NULL, helper_surface, NULL); 

SDL_UpdateTexture(texture, &pick, helper_surface->pixels, helper_surface->pitch);

SDL_RenderCopy(renderer, texture, NULL, NULL); 
SDL_RenderPresent(renderer); 

}

Let me know how it works out for you. I use this myself and it’s very fast when done in this order.

And a small tip: if you're loading an image into the surface, convert the surface once right after the image is loaded.

Thanks for your detailed answers!

I will look at them and try some suggestions over the next days.

… btw, I am not loading bitmaps. I just want to do some pixel manipulation in software and then "blit" to screen, e.g. starfields, palette effects etc. … the old demo stuff :wink:

Magnet wrote:

// Now create your surface and convert the pixel format right away!
// Converting the pixel format to match the texture makes for quicker updates, otherwise
// It has to do the conversion each time you update the texture manually. This slows everything down.
// Do it once, and don’t have to worry about it again.

helper_surface =
SDL_CreateRGBSurface(
SDL_SWSURFACE, 320, 200, 32,
redMask, greenMask, blueMask, alphaMask);

helper_surface =
SDL_ConvertSurfaceFormat(
helper_surface, SDL_PIXELFORMAT_ARGB8888, 0);

video_buffer =
SDL_CreateRGBSurface(
SDL_SWSURFACE, 320, 200, 32,
redMask, greenMask, blueMask, alphaMask);

video_buffer =
SDL_ConvertSurfaceFormat(
video_buffer, SDL_PIXELFORMAT_ARGB8888, 0);


Let me know how it works out for you. I use this myself and it’s very fast when done in this order.

@Magnet
I looked at your suggested code. I will try that, but two questions.
I want an 8-bit palettized surface as video_buffer; will your code still work then?
I figured out that, in this case, I need to convert from 8-bit to 32-bit every time before I update the texture.
If that is not needed, then I wouldn't need the helper_surface. Am I wrong?

Code:

// New update
update_vga()
{

SDL_Rect rect; 

rect.x = 0; 
rect.y = 0; 
rect.w = 320; 
rect.h = 200; 

SDL_BlitSurface(video_buffer, NULL, helper_surface, NULL); 

SDL_UpdateTexture(texture, &pick, helper_surface->pixels, helper_surface->pitch); 

SDL_RenderCopy(renderer, texture, NULL, NULL); 
SDL_RenderPresent(renderer); 

}

What is rect and &pick here?

sanitowi wrote:

I want an 8-bit palettized surface as video_buffer, will your code still work then?
[…]
What is rect and &pick here?

That is correct. Keep your 8-bit surface if you need it… you might not need to convert it yourself if you blit to an already-converted surface. You'll have to play with that some… the main thing is to convert before the texture update, and to limit conversions where you can, to save speed.

Oops, pick is supposed to be rect… a typo. It copies just the specified region of the surface; NULL works too. I like putting in the size, though.