How to read ARGB pixel data from SDL2 surface and store them in an array?

I’ve been trying for hours now to figure out why this piece of code is giving me a seg fault, but I can’t see it.

 void pixel::Texture::LoadTexture(const char* filepath)
 {
    SDL_Surface* image = IMG_Load(filepath);
    SDL_Surface* formattedImage = SDL_ConvertSurfaceFormat(image, SDL_PIXELFORMAT_ARGB8888, 0);

    textureBuffer = new Uint32[formattedImage->pitch * formattedImage->h];

    width = formattedImage->w;
    height = formattedImage->h;
    pitch = formattedImage->pitch;

    SDL_LockSurface(formattedImage);

    Uint32* pixels = (Uint32*)formattedImage->pixels;

    int bpp = formattedImage->format->BytesPerPixel;
    
    for(int y = 0; y < height; y++)
        for(int x = 0; x < width; x++)
            textureBuffer[(formattedImage->pitch * y) + (x * bpp)] = pixels[(formattedImage->pitch * y) + (x * bpp)];

    SDL_UnlockSurface(formattedImage);

    SDL_FreeSurface(image);
    SDL_FreeSurface(formattedImage);
 }

Essentially, I have an array of unsigned 32-bit integers called the texture buffer. All I do is load an image into a surface, convert its pixel format to ARGB, and then read each pixel and store it in my texture buffer array. But at some point at runtime the pixels array gets read out of bounds, and I can’t figure out exactly why.

textureBuffer[(formattedImage->pitch * y) + (x * bpp)] = pixels[(formattedImage->pitch * y) + (x * bpp)];

The ->pitch field and the bpp variable are measured in bytes, but textureBuffer is an array of Uint32.

Which is to say, this looks like it might be asking for, say, the 8th byte in the array, but since each element is four bytes, it’s actually accessing the 32nd. Likewise for pixels, which is where the out-of-bounds reads come from.
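
One way to fix it (an untested sketch, reusing your textureBuffer/width/height/pitch members): allocate one Uint32 per pixel, and step through the surface one row at a time using the pitch in bytes, so any padding at the end of each row gets skipped instead of copied:

 void pixel::Texture::LoadTexture(const char* filepath)
 {
    SDL_Surface* image = IMG_Load(filepath);
    SDL_Surface* formattedImage = SDL_ConvertSurfaceFormat(image, SDL_PIXELFORMAT_ARGB8888, 0);

    width = formattedImage->w;
    height = formattedImage->h;
    pitch = formattedImage->pitch;

    // One Uint32 per pixel; row padding bytes are never copied.
    textureBuffer = new Uint32[width * height];

    SDL_LockSurface(formattedImage);

    for(int y = 0; y < height; y++)
    {
        // pitch is in bytes, so do the row arithmetic on a byte pointer...
        Uint32* row = (Uint32*)((Uint8*)formattedImage->pixels + (y * formattedImage->pitch));

        // ...then index within the row in whole pixels.
        for(int x = 0; x < width; x++)
            textureBuffer[(y * width) + x] = row[x];
    }

    SDL_UnlockSurface(formattedImage);

    SDL_FreeSurface(image);
    SDL_FreeSurface(formattedImage);
 }

You’ll also want to check that IMG_Load and SDL_ConvertSurfaceFormat didn’t return NULL before touching the results; either one can fail, and that will seg fault too.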

–ryan.