Adding noise to texture in real time

So I put together a simple procedure to do this. It loads a series of noise images into an array (I just used the noise tool in my image editor) and makes the part of the destination texture that I want noisy translucent. Each frame, it draws the current noise image first, advances the index into the noise array, then draws the destination texture on top.
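In rough sketch form, it's something like this (SDL2; the names and count here are made up for illustration, not my actual code):

```c
#include <SDL.h>

#define NOISE_COUNT 10 /* however many premade noise images got loaded */

SDL_Texture *noise[NOISE_COUNT]; /* premade noise images, loaded at startup */
SDL_Texture *dest;               /* destination texture, translucent where it should be noisy */
int noise_index = 0;

void draw_frame(SDL_Renderer *r)
{
    SDL_RenderClear(r);
    SDL_RenderCopy(r, noise[noise_index], NULL, NULL); /* noise underneath */
    noise_index = (noise_index + 1) % NOISE_COUNT;     /* step to the next noise image */
    SDL_RenderCopy(r, dest, NULL, NULL);               /* translucent texture on top */
    SDL_RenderPresent(r);
}
```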

It’s kind of a not-so-great method, though, for these reasons:

  1. It looks more like a translucent texture with noise underneath it than an actual noisy texture.
  2. With a decent frame rate, it becomes very obvious that I only have a few premade noise textures, even with ten or more to choose from, and adding more would substantially increase file size and memory use.

So, does anyone know a method for generating noise in real time and applying it to a texture? I can come up with some even kludgier nonsense (hey why don’t we just randomly generate 1000 noise textures at load time?) but honestly, I’m hoping there is a more legitimate method at my disposal.

Thanks!

It would be more realistic to draw the noise last, using SDL_BLENDMODE_ADD. That would of course mean that your pre-prepared noise textures would need to have the noise only where you want it, with the rest black.
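Something like this, for instance (a sketch; `scene` and `noise` are placeholder texture names):

```c
/* Draw the destination texture first, then blend the noise additively on
   top. Black areas of the noise texture add nothing, so they stay clean. */
SDL_SetTextureBlendMode(noise, SDL_BLENDMODE_ADD);

SDL_RenderCopy(renderer, scene, NULL, NULL);
SDL_RenderCopy(renderer, noise, NULL, NULL);
SDL_RenderPresent(renderer);
```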

You refer to “iterating” the index into the noise array. I’d suggest that instead you choose the index ‘randomly’ (with a check that you don’t use the same index twice in succession); that should help disguise the fact that you only have a limited number of noise textures.
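For instance (a quick sketch in C):

```c
#include <stdlib.h>

/* Pick a random index, rejecting an immediate repeat so the same noise
   texture is never shown two frames in a row. */
int pick_noise_index(int count, int last)
{
    int i;
    do {
        i = rand() % count;
    } while (count > 1 && i == last);
    return i;
}
```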

Have a look at Simple Noise Node | Shader Graph | 7.1.8 to see how Unity generates noise programmatically in real time, with code included.

Pre-preparing a mask is actually the complication here, since I’m rasterizing my textures from SVG at runtime. This way I can scale for any resolution (and zoom in and stuff) without the nasty blur and quantization of raster scaling. The noise itself is raster, though, since I want it at the finest possible grain. I know, it’s like a Dr. Frankenstein experiment, but hey, pushing things to their limit (and beyond) is what makes programming fun!

You know, I bet I could generate a mask by extending the SVG specification to make certain shapes conditionally transparent (not… that I’ve already… developed… something like that…) and then rendering to a separate texture with a particular blend mode. I seem to recall, from working on a different issue, that at least one blend mode keeps the minimum alpha value.
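If the blend mode I’m half-remembering is SDL’s custom blend support, a minimum-alpha mode would look something like this (a guess on my part, assuming SDL 2.0.6+ and a hypothetical `mask_layer` texture; not every renderer backend supports SDL_BLENDOPERATION_MINIMUM, so the return value needs checking):

```c
/* Compose a blend mode whose alpha channel keeps the minimum of source
   and destination alpha; the color settings here are incidental. */
SDL_BlendMode min_alpha = SDL_ComposeCustomBlendMode(
    SDL_BLENDFACTOR_ONE, SDL_BLENDFACTOR_ZERO, SDL_BLENDOPERATION_ADD,      /* color: keep source */
    SDL_BLENDFACTOR_ONE, SDL_BLENDFACTOR_ONE,  SDL_BLENDOPERATION_MINIMUM); /* alpha: min(src, dst) */

if (SDL_SetTextureBlendMode(mask_layer, min_alpha) != 0) {
    SDL_Log("minimum-alpha blend unsupported: %s", SDL_GetError());
}
```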

You know what, when you put it that way, I could just generate one relatively large noise texture, set a viewport, and draw it tiled at a random offset. Since there’s (supposed to be) no correlation between pixel values, you probably won’t be able to tell the difference. It probably doesn’t even need to be that big of a texture.
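Something along these lines, maybe (a sketch; `noise_big` and the dimension constants are hypothetical, with the noise texture made a bit larger than the view so a random source rect always fits without actual tiling):

```c
/* Pick a random window into the big noise texture each frame and stretch
   it over the view; uncorrelated pixels should make the reuse invisible. */
SDL_Rect src;
src.w = VIEW_W;
src.h = VIEW_H;
src.x = rand() % (NOISE_W - VIEW_W + 1);
src.y = rand() % (NOISE_H - VIEW_H + 1);
SDL_RenderCopy(renderer, noise_big, &src, NULL);
```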

Yeah, generating on the fly by iterating pixel by pixel might not be a bad option if all else fails, but I think this is going to end up being the kind of problem that falls exactly into that awkward zone where it’s too big to do on the CPU in a timely manner but too small to really benefit from GPGPU. It could always happen during loading, if it comes to that.
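For the record, the per-pixel CPU version is short enough, assuming a streaming SDL_PIXELFORMAT_RGBA8888 texture (names hypothetical):

```c
#include <SDL.h>
#include <stdlib.h>

/* Refill a streaming texture with fresh white noise on the CPU. */
void regen_noise(SDL_Texture *tex, int w, int h)
{
    void *pixels;
    int pitch;

    if (SDL_LockTexture(tex, NULL, &pixels, &pitch) != 0)
        return; /* lock failed; keep the old noise */

    for (int y = 0; y < h; ++y) {
        Uint32 *row = (Uint32 *)((Uint8 *)pixels + y * pitch);
        for (int x = 0; x < w; ++x) {
            Uint32 v = (Uint32)(rand() & 0xFF);                /* one random gray level */
            row[x] = (v << 24) | (v << 16) | (v << 8) | 0xFFu; /* RGBA8888, opaque */
        }
    }
    SDL_UnlockTexture(tex);
}
```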

Yeah, this example is from a node-based framework that generates shader code. It runs plenty fast as a shader, but might be slower than you’d like running per-pixel on the CPU.

I believe SDL still doesn’t support custom shaders, but SDL_GPU does. It’s very useful if your project is in that spot where you need more power than core SDL provides, but don’t actually need full bare-metal 3D.