# Gaussian Blur in C

I’m trying to create a Gaussian blur in C using SDL.

Here is my function:

Assume that the parameter `surface` is a grayscale image (that’s why I only use the red channel).

```c
SDL_Surface* gaussian_blur(SDL_Surface* surface) {
    int w = surface->w;
    int h = surface->h;

    SDL_Surface* res = SDL_CreateRGBSurface(0, w, h, 32, 0, 0, 0, 0);

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            Uint8 r = 0;

            for (int i = -2; i <= 2; i++) {
                for (int j = -2; j <= 2; j++) {
                    Uint32 pixel = get_pixel(surface, x+i, y+j);
                    double weight = core[i+2][j+2];

                    r += pixel*weight;
                }
            }

            Uint32 nPixel = SDL_MapRGB(res->format, r, r, r);
            put_pixel(res, x, y, nPixel);
        }
    }

    free_surface(surface);
    return res;
}
```

My kernel, `core`, is defined as follows:

```c
#define KERNEL_SIZE 5

double core[KERNEL_SIZE][KERNEL_SIZE] = {
    {1.0/273.0,  4.0/273.0,  7.0/273.0,  4.0/273.0, 1.0/273.0},
    {4.0/273.0, 16.0/273.0, 26.0/273.0, 16.0/273.0, 4.0/273.0},
    {7.0/273.0, 26.0/273.0, 41.0/273.0, 26.0/273.0, 7.0/273.0},
    {4.0/273.0, 16.0/273.0, 26.0/273.0, 16.0/273.0, 4.0/273.0},
    {1.0/273.0,  4.0/273.0,  7.0/273.0,  4.0/273.0, 1.0/273.0}
};
```

Here are the helper functions used in `gaussian_blur`:

```c
Uint32 get_pixel(SDL_Surface* surface, int x, int y) {
    if (surface != NULL) {
        int w = surface->w;
        int h = surface->h;

        if (x >= 0 && x < w && y >= 0 && y < h) {
            Uint32* pixels = (Uint32*)surface->pixels;
            return pixels[y * w + x];
        }
    }
    return 0;
}

void put_pixel(SDL_Surface* surface, int x, int y, Uint32 pixel) {
    if (surface != NULL) {
        int w = surface->w;
        int h = surface->h;

        if (x >= 0 && x < w && y >= 0 && y < h) {
            Uint32* pixels = (Uint32*)surface->pixels;
            pixels[y * w + x] = pixel;
        }
    }
}
```

(Note: the `NULL` check has to come before reading `surface->w` and `surface->h`, otherwise a `NULL` surface is dereferenced before it is tested.)

This picture shows the result of the function: [image: result of the function]

I’ve searched on different sites but I can’t find anything that could help me.

If anyone has any ideas on how to go about it, I’d love to hear from you.

Pygame-ce has a gaussian blur: https://github.com/pygame-community/pygame-ce/blob/56f30c38c2c2c0b43cfa9099f6470e2e9712375e/src_c/transform.c#L3066


Don’t use those helper functions (`get_pixel`/`put_pixel`) — they are absolutely unnecessary. You’re using the most efficient language on the planet, yet you’re wasting CPU time calling functions to do what you can do directly in the loop body. In addition, for each pixel you execute a complex conditional that will always be met anyway. Even if the branch predictor notices this, these two functions are completely unnecessary.

Implement all calculations inside a loop, don’t be afraid of long functions. However, if you need additional checks, perform them once, at the beginning of the `gaussian_blur` function (add assertions if necessary).
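As a minimal sketch of that advice (the `Surface` struct and `validate_surface` name below are stand-ins I made up for illustration, not SDL API), the validity checks can run once up front instead of once per kernel tap:

```c
#include <stddef.h>

/* Stand-in for SDL_Surface, just for this sketch. */
typedef struct {
    int w, h;
    void *pixels;
} Surface;

/* One up-front check replaces millions of per-pixel branches:
   call this once at the top of gaussian_blur, then index the
   pixel buffer directly inside the loops. */
int validate_surface(const Surface *surface) {
    return surface != NULL && surface->pixels != NULL
        && surface->w > 0 && surface->h > 0;
}
```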

Multiplying a 32-bit pixel value (most likely containing 24-bit sRGB) by a double and then adding the result to an 8-bit integer didn’t end well, as your output shows.

Since the 32-bit integer most likely packs data from multiple channels, doing floating-point multiplication on it as a whole is meaningless. You could extract one of the channels with pixel&0xFF, (pixel>>8)&0xFF, or (pixel>>16)&0xFF. Even then, sRGB channel values are not on a linear scale, which produces a darkening distortion, and accumulating the floating-point products into an 8-bit integer truncates each addition, causing rounding errors (and wrap-around once the sum exceeds 255).
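Putting those points together, here is a sketch of a corrected inner loop. To keep it self-contained it works on a plain `uint32_t` buffer rather than an `SDL_Surface` (so it assumes tightly packed `0xAARRGGBB` pixels with `pitch == w * 4`), ignores sRGB linearization for simplicity, and the name `gaussian_blur_gray` is mine, not SDL’s:

```c
#include <stdint.h>

#define KERNEL_SIZE 5

/* Same 5x5 Gaussian weights as in the question, normalized by 273. */
static const double core[KERNEL_SIZE][KERNEL_SIZE] = {
    {1.0/273.0,  4.0/273.0,  7.0/273.0,  4.0/273.0, 1.0/273.0},
    {4.0/273.0, 16.0/273.0, 26.0/273.0, 16.0/273.0, 4.0/273.0},
    {7.0/273.0, 26.0/273.0, 41.0/273.0, 26.0/273.0, 7.0/273.0},
    {4.0/273.0, 16.0/273.0, 26.0/273.0, 16.0/273.0, 4.0/273.0},
    {1.0/273.0,  4.0/273.0,  7.0/273.0,  4.0/273.0, 1.0/273.0}
};

/* Blur a grayscale image stored as tightly packed 0xAARRGGBB pixels.
   dst must not alias src. */
void gaussian_blur_gray(const uint32_t *src, uint32_t *dst, int w, int h) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double acc = 0.0;                 /* accumulate in double, not Uint8 */
            for (int j = -2; j <= 2; j++) {
                for (int i = -2; i <= 2; i++) {
                    int sx = x + i, sy = y + j;
                    /* clamp to the edge instead of sampling 0 outside the
                       image, which would darken the borders */
                    if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
                    /* extract one channel before multiplying */
                    uint8_t r = (src[sy * w + sx] >> 16) & 0xFF;
                    acc += r * core[j + 2][i + 2];
                }
            }
            uint8_t v = (uint8_t)(acc + 0.5); /* round instead of truncating */
            dst[y * w + x] = 0xFF000000u
                           | ((uint32_t)v << 16) | ((uint32_t)v << 8) | v;
        }
    }
}
```

Because the weights sum to 1 and edges are clamped, a uniform gray input comes out unchanged, which makes a convenient smoke test.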