[question] Creating an SDL_Texture from an array

Hello, I am writing a program to generate and display two-dimensional cellular automata as fast as possible.

In my quest for speed, I am currently under the assumption that the fastest way to display the CA on screen is to write the CA data (0s and 1s) to a texture, translating the values to the desired RGBA color (0 = black = 0, 0, 0, 0 and 1 = magenta = 255, 0, 255, 255). For the purpose of this question, let's say that two arrays are necessary (CA values & pixel colors). I plan to merge them once I can reliably control the display.

I am finding it difficult to use SDL_UpdateTexture for these purposes. I have some inherent misunderstanding of how it reads the given pixel array, and was hoping to find some clarity.

Ignoring the CAs, this is essentially what I have so far.

    //init
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win;
    SDL_Renderer *rend;
    SDL_Texture *texture;

    //define
    win = SDL_CreateWindow("C.H.A.O.S.", 
                            SDL_WINDOWPOS_CENTERED, 
                            SDL_WINDOWPOS_CENTERED, 
                            800, 800, 
                            SDL_WINDOW_BORDERLESS);
    
    rend = SDL_CreateRenderer(win, 
                            -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
    
    int t_length = 2;
    int t_width = 2;
    texture = SDL_CreateTexture(rend, 
                                SDL_PIXELFORMAT_ARGB8888, 
                                SDL_TEXTUREACCESS_STREAMING, 
                                t_length, 
                                t_width);

    
    //pixels
    int *pixels = malloc(t_width * t_length * 4 * sizeof(int));

    for (int i=0; i<t_width * t_length * 4; i++){
        *(pixels + i) = 0;
    }  

    for (int i=0; i<t_width; i++) {
        *(pixels + i) = 255;
    }


    SDL_UpdateTexture(texture, NULL, pixels, t_width * sizeof(*pixels));

    SDL_RenderClear(rend);
    SDL_RenderCopy(rend, texture, NULL, NULL);
    SDL_RenderPresent(rend);

The final pixel array is [255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0].
This displays two blue boxes at the top of the screen and two black ones at the bottom.

I imagined the array would be interpreted as [r, g, b, a, r, g, b, a…], but that is where my intuition dies. Any help would be appreciated.

best, Chaotomata.

I realize I have the texture format set to ARGB and used RGBA in the example, but the spirit of the intuition is the point.

It’s hard to tell from your description, but I think your problem is that pixels is an int pointer, not a byte pointer. Every element in your array is therefore four bytes long, and is thus an entire ARGB value. Changing the type of pixels to a Uint8 * is probably what you want.

If you really want “as fast as possible” then use shader code! You can still use an SDL2 wrapper, but drill through to the GPU using the SDL_GL_ functions, or use SDL_gpu if you prefer.

Here’s the classic Conway’s Game of Life running on the GPU; the update speed is limited by the display refresh rate, otherwise some generations wouldn’t be visible at all!

In fact, it is not recommended to use SDL_UpdateTexture for streaming textures. Instead, use SDL_LockTexture and SDL_UnlockTexture.

#include <SDL2/SDL.h>
#include <stdio.h>

int main(void)
{
  //init
  SDL_Init(SDL_INIT_VIDEO);
  SDL_Window *win;
  SDL_Renderer *rend;
  SDL_Texture *texture;
  
  //define
  win = SDL_CreateWindow("C.H.A.O.S.",
                         SDL_WINDOWPOS_CENTERED,
                         SDL_WINDOWPOS_CENTERED,
                         800, 800,
                         SDL_WINDOW_BORDERLESS);
  
  rend = SDL_CreateRenderer(win, 
                            -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
  
  int t_length = 2;
  int t_width = 2;
  texture = SDL_CreateTexture(rend,
                              SDL_PIXELFORMAT_ARGB8888,
                              SDL_TEXTUREACCESS_STREAMING,
                              t_length,
                              t_width);
  
  
  //pixels
  uint8_t *pixels;
  int pitch; // SDL_LockTexture fills this in (bytes per row)

  SDL_LockTexture(texture, NULL, (void **)&pixels, &pitch);
  
  for (int i=0; i<t_width * t_length * 4; i++){
    *(pixels + i) = 0;
  }  
  
  for (int i=0; i<t_width; i++) {
    *(pixels + i) = 255;
  }
  
  
  SDL_UnlockTexture(texture);
  
  SDL_RenderClear(rend);
  SDL_RenderCopy(rend, texture, NULL, NULL);
  SDL_RenderPresent(rend);
  
  SDL_Delay(3000);
  
  SDL_DestroyTexture(texture);
  SDL_DestroyRenderer(rend);
  SDL_DestroyWindow(win);

  SDL_Quit();
  
  return 0;
}

Thank you! It works perfectly, and it’s faster than I ever could’ve imagined.

I thought textures always use the GPU …