Blit texture to texture?

Something like this has been asked before, but that was many years ago, so maybe things have changed since then.

Can I change a texture dynamically by drawing onto it? Similarly to Surface’s Blit, I want to draw my textures onto each other without rendering them to a window.

Right now in my little game project I call CreateTextureFromSurface on each frame and then Destroy the texture afterwards.

How efficient is that? I don’t see any performance issues, but still…

I’m writing a Digger clone: my digger eats the field (modifying the field surface), and I need to render that field every frame, which is where I need textures.

Thanks.

Of course you can. All you have to do is create a texture to use as an auxiliary back buffer (that is, as a render target). To do this, create the back buffer texture with the SDL_CreateTexture function, specifying the SDL_TEXTUREACCESS_TARGET flag. To render other textures onto it, first set the render target with SDL_SetRenderTarget, then copy the textures (or portions of them) with SDL_RenderCopy. Afterwards, restore the original render target by calling SDL_SetRenderTarget with a null texture handle.

If you need the best rendering performance, use hardware rendering; textures are the simplest way to get it. To have access to hardware acceleration and the ability to set a texture as a render target, create the renderer with the SDL_CreateRenderer function, passing the SDL_RENDERER_ACCELERATED and SDL_RENDERER_TARGETTEXTURE flags.
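
Put together with the go-sdl2 bindings used elsewhere in this thread, the whole flow looks roughly like the sketch below (window and sprite are assumed to already exist; the sizes and rectangle are placeholders):

renderer, err := sdl.CreateRenderer(window, -1,
	sdl.RENDERER_ACCELERATED|sdl.RENDERER_TARGETTEXTURE)
if err != nil {
	panic(err)
}

// The auxiliary back buffer: a texture created with TARGET access.
backBuffer, err := renderer.CreateTexture(sdl.PIXELFORMAT_RGBA8888,
	sdl.TEXTUREACCESS_TARGET, 800, 600)
if err != nil {
	panic(err)
}

// Redirect rendering into the back buffer, blit a sprite texture onto
// it, then restore the window as the default render target.
renderer.SetRenderTarget(backBuffer)
renderer.Copy(sprite, nil, &sdl.Rect{X: 16, Y: 16, W: 32, H: 32})
renderer.SetRenderTarget(nil)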

If you need to know how many times faster hardware texture rendering is than software surface rendering, just do a simple benchmark.
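
For example, a rough sketch using SDL’s performance counters (timeCopies is a made-up helper; renderer and texture are assumed to exist). Running the same loop with Surface.Blit instead gives the software side of the comparison:

// Time n hardware texture copies; returns elapsed seconds.
func timeCopies(renderer *sdl.Renderer, texture *sdl.Texture, n int) float64 {
	start := sdl.GetPerformanceCounter()
	for i := 0; i < n; i++ {
		renderer.Copy(texture, nil, nil)
	}
	renderer.Present() // flush the queued GPU commands before stopping the clock
	return float64(sdl.GetPerformanceCounter()-start) /
		float64(sdl.GetPerformanceFrequency())
}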

Thanks @furious-programming. I read about this solution, but I thought it was some kind of workaround rather than official API usage. Hijacking the render target… I was at least expecting to have to create a new renderer.
Anyway, I’m refactoring my code, and here is another problem I’m dealing with: I need to get the RGBA of a pixel at X,Y. With a Surface I do it like this:

// At returns the pixel color at (x, y).
// Note: the uint32 read assumes a surface format with 4 bytes per pixel.
func (surface *Surface) At(x, y int) color.Color {
	pix := surface.Pixels()
	i := int32(y)*surface.Pitch + int32(x)*int32(surface.Format.BytesPerPixel)
	r, g, b, a := GetRGBA(*((*uint32)(unsafe.Pointer(&pix[i]))), surface.Format)
	return color.RGBA{r, g, b, a}
}

But how do I get a pixel from a Texture?

Should I use SDL_RenderReadPixels and then SDL_GetRGBA?
With this approach I need to allocate a new pixel buffer for every read. Wouldn’t it be more efficient to read the actual data in place instead of copying it into an output parameter?

The actual data lives in VRAM and is inaccessible to the CPU, so a copy has to be performed.
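
A sketch of that read-back path in go-sdl2, reading a single pixel from a target texture via SDL_RenderReadPixels (the function name and parameters are illustrative; note this forces a slow GPU-to-CPU transfer):

import (
	"image/color"
	"unsafe"

	"github.com/veandco/go-sdl2/sdl"
)

// pixelAt reads one RGBA8888 pixel back from a render-target texture.
func pixelAt(renderer *sdl.Renderer, target *sdl.Texture, x, y int32) (color.RGBA, error) {
	if err := renderer.SetRenderTarget(target); err != nil {
		return color.RGBA{}, err
	}
	defer renderer.SetRenderTarget(nil)

	buf := make([]byte, 4) // room for a single 32-bit pixel
	rect := sdl.Rect{X: x, Y: y, W: 1, H: 1}
	if err := renderer.ReadPixels(&rect, sdl.PIXELFORMAT_RGBA8888,
		unsafe.Pointer(&buf[0]), 4); err != nil {
		return color.RGBA{}, err
	}

	format, err := sdl.AllocFormat(uint(sdl.PIXELFORMAT_RGBA8888))
	if err != nil {
		return color.RGBA{}, err
	}
	defer format.Free()

	r, g, b, a := sdl.GetRGBA(*(*uint32)(unsafe.Pointer(&buf[0])), format)
	return color.RGBA{r, g, b, a}, nil
}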

This is not a workaround; it is the intended way of supporting graphics rendering on the GPU, i.e. hardware acceleration. That is exactly what GPUs were created for. For this kind of work the CPU is orders of magnitude slower and simply not suited to serious rendering.

Changing the render target is also not a workaround, because it allows you to create textures as back buffers or as layers of the final frame image and fill them independently (and apply different filters etc.). This gives you a lot of possibilities.

Short of writing your own shader-based renderer, there is no other way; but it depends on what exactly you want to do with the pixel data. You can use SDL_QueryTexture and the locking/unlocking functions, but locking is meant for writing data to the texture, not for reading its pixels.
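
For completeness, a sketch of that write path with a streaming texture (the solid fill is only an illustration; renderer is assumed to exist):

// The locked buffer is write-only staging memory, not the texture's
// current contents, so it cannot be used to read pixels back.
tex, err := renderer.CreateTexture(sdl.PIXELFORMAT_RGBA8888,
	sdl.TEXTUREACCESS_STREAMING, 64, 64)
if err != nil {
	panic(err)
}
pixels, pitch, err := tex.Lock(nil) // nil locks the whole texture
if err != nil {
	panic(err)
}
for y := 0; y < 64; y++ {
	row := pixels[y*pitch : y*pitch+64*4] // pitch is the byte width of one row
	for i := range row {
		row[i] = 0xFF // fill with opaque white
	}
}
tex.Unlock() // uploads the staged pixels to the GPU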

If you need to modify pixel data frequently, use surfaces. However, if you only need to render graphics in a window or on other targets, use textures, because their (hardware) rendering is much faster.

Thanks @furious-programming for this extensive explanation. It really helps.

I have a field in the game which can be eaten by the digger (player) and by hobbins (a special type of monster). I read the RGBA of a pixel to find out whether the field has been eaten at some X,Y (just checking for zeros, which mean ‘black’).

So what would be the best fit for this use case: surface or texture? (or a bitmask of the field to track collisions)

Another thing I’ve noticed: the dynamic texture disappears on window resize! Here is the code:

package main

import (
	"github.com/veandco/go-sdl2/img"
	"github.com/veandco/go-sdl2/sdl"
)

func main() {
	if err := sdl.Init(sdl.INIT_EVERYTHING); err != nil {
		panic(err)
	}
	defer sdl.Quit()

	window, err := sdl.CreateWindow("test", sdl.WINDOWPOS_UNDEFINED, sdl.WINDOWPOS_UNDEFINED,
		800, 600, sdl.WINDOW_SHOWN|sdl.WINDOW_RESIZABLE)
	if err != nil {
		panic(err)
	}
	defer window.Destroy()

	rendererIns, _ := sdl.CreateRenderer(window, -1, sdl.RENDERER_PRESENTVSYNC|sdl.RENDERER_ACCELERATED|sdl.RENDERER_TARGETTEXTURE)

	surf, _ := img.Load("digger.png")
	txt, _ := rendererIns.CreateTextureFromSurface(surf)

	offlineTexture, _ := rendererIns.CreateTexture(sdl.PIXELFORMAT_RGBA8888, sdl.TEXTUREACCESS_TARGET, 800, 600)

	// Draw the sprite once into the offscreen target texture.
	rendererIns.SetRenderTarget(offlineTexture)
	rendererIns.Copy(txt, nil, nil)
	rendererIns.SetRenderTarget(nil)

	running := true
	for running {
		for event := sdl.PollEvent(); event != nil; event = sdl.PollEvent() {
			switch event.(type) {
			case *sdl.QuitEvent:
				println("Quit")
				running = false
			}
		}
		rendererIns.Clear()
		rendererIns.Copy(offlineTexture, nil, &sdl.Rect{X: 0, Y: 0, W: 800, H: 600})
		rendererIns.Present()
	}
}

What can be wrong with it?

Update:
I found that
sdl.SetHint(sdl.HINT_RENDER_DRIVER, "opengl")
fixes the problem (I’m on Windows). Weird. I need to further investigate this…

Strange. I always use the default driver myself and let SDL choose the available renderer (I don’t set the SDL_HINT_RENDER_DRIVER hint anywhere). So far I haven’t had any problems with it, applications have always been able to initialize SDL flawlessly on Windows and Linux.

Check whether any of the functions you use return an error value, and whether the SDL_GetError function returns a string with error information at some point.
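
In the go-sdl2 bindings the returned errors already wrap the SDL_GetError message, so a sketch of that check could be:

rendererIns, err := sdl.CreateRenderer(window, -1,
	sdl.RENDERER_PRESENTVSYNC|sdl.RENDERER_ACCELERATED|sdl.RENDERER_TARGETTEXTURE)
if err != nil {
	panic(err) // the message comes from SDL_GetError
}
// sdl.GetError can also be polled directly after any suspicious call:
if err := sdl.GetError(); err != nil {
	println("SDL error:", err.Error())
}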

I decided to leave my implementation as it is: using a surface for the dynamic background and converting it to a texture on each frame.
The reason: dynamic textures are not stable!
On Windows (with DirectX) the texture disappears if you resize the window. On my PocketGo (where only software rendering is available) the background texture is simply torn apart and shifted downwards. I have no wish to debug this further only to find out that there are some SDL-related limitations.
Also, my performance measurements showed that the CreateTextureFromSurface call is not CPU-intensive.

Textures are "stable", but they are meant for rendering (copying between textures) and for modifying (writing only); they should never be used to read their contents back, as explained in the SDL documentation.

I’ve already built a whole game (called Fairtris) based solely on textures and haven’t noticed any problem with the textures. In the end, that’s why I reached for SDL, to have access to the GPU and highly efficient rendering. I don’t think I’ll ever need a software renderer and surfaces (I can program something like this myself, I don’t need libraries for it).

I don’t know what the issue is, but if you change the window size then you should re-render the image in the window afterwards. I haven’t encountered the texture disappearance you’re describing.

But it may be intensive overall.

If you call this function once a day, it doesn’t matter at all, but calling it hundreds or thousands of times in every frame will significantly degrade the performance of the game. The matter is simple: the fewer operations on memory, the higher the efficiency. If conversions are necessary, consider caching where it applies to your project.
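
Applied to the surface-to-texture conversion discussed in this thread, the caching idea might look like this sketch (fieldDirty, fieldSurface, and fieldTexture are made-up names):

// Rebuild the texture only when the field surface actually changed,
// instead of calling CreateTextureFromSurface on every frame.
if fieldDirty {
	if fieldTexture != nil {
		fieldTexture.Destroy()
	}
	fieldTexture, err = rendererIns.CreateTextureFromSurface(fieldSurface)
	if err != nil {
		panic(err)
	}
	fieldDirty = false // stays false until the digger eats more of the field
}
rendererIns.Copy(fieldTexture, nil, nil)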

They also disappear for me if they are not in STATIC mode.

Make sure you’re checking for and handling SDL_RENDER_TARGETS_RESET and SDL_RENDER_DEVICE_RESET events.

SDL_RENDER_TARGETS_RESET means your render-target textures had to be reset and you need to redraw their contents.

SDL_RENDER_DEVICE_RESET means the render device was reset by the underlying GPU API and now all your textures are invalid; you’ll have to re-create them and upload their contents again.
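
In the Go event loop from the snippet above, handling this could look like the following sketch, where redrawBackBuffer and recreateTextures are hypothetical helpers that repeat the original SetRenderTarget/Copy work:

for event := sdl.PollEvent(); event != nil; event = sdl.PollEvent() {
	switch e := event.(type) {
	case *sdl.QuitEvent:
		running = false
	case *sdl.RenderEvent:
		switch e.Type {
		case sdl.RENDER_TARGETS_RESET:
			// Target textures still exist but their contents are gone:
			// repaint the offscreen texture (hypothetical helper).
			redrawBackBuffer(rendererIns, offlineTexture)
		case sdl.RENDER_DEVICE_RESET:
			// The device itself was lost: re-create every texture
			// before repainting (hypothetical helper).
			recreateTextures(rendererIns)
		}
	}
}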


@sjr, so that basically means I need to keep a history of all the changes, or somehow accumulate the state of my dynamic texture?

I understand that this approach is dictated by the nature of interacting with the GPU, but what if I don’t want to complicate things that much? Should I use a Surface as the backup, or use a Surface for the primary drawing and copy it to a volatile Texture every time there is a change in the output?

If you’re uploading texture contents to the GPU every frame then you need to keep a copy of that on the CPU side anyway.

With hardware rendering, the general expectation is that you redraw the frame entirely, every frame: re-rendering everything to your offscreen texture (assuming that’s all being done by the GPU), and so on. Obviously things will be different if you’re making, say, a drawing program, but that’s the general idea.

For your case, a Digger clone, I see three options:

  1. Do all your drawing on the CPU side (which, honestly, should be plenty fast with even a halfway decent CPU, especially if you use dirty rects etc)
  2. Forego pixel-perfect collisions, do all rendering with the GPU, and just use a tile map. You upload your tiles as textures to the GPU, keep a tile map on the CPU side, and fire off SDL_RenderCopy() calls based on the tile map (see the sketch after this list).
  3. Ditch SDL_Renderer and write your own hardware renderer. This way you could use shaders. You’d have something like a “dig map” on the CPU side, and this would let you do pixel-perfect collision detection like you want, and then every frame you upload it as a 1-channel texture. Then in your fragment shader you take the dirt texture as one input and your dig map texture as the other, and treat the dig map texture like an alpha channel. Then just draw everything else in separate draw calls using “normal” shaders.

For #3 it might be better to wait for SDL3 and SDL_GPU.
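
For option 2, a minimal sketch of the tile-map idea in Go (the tile size, tileset layout, and all names here are made-up placeholders):

// CPU-side tile map, GPU-side rendering. Each cell indexes a 20x20
// tile inside a single horizontal-strip tileset texture; 0 = eaten.
const tile = 20

var fieldMap [600 / tile][800 / tile]uint8

func drawField(renderer *sdl.Renderer, tileset *sdl.Texture) {
	for ty, row := range fieldMap {
		for tx, id := range row {
			if id == 0 {
				continue // eaten cell: leave the background showing
			}
			src := sdl.Rect{X: int32(id) * tile, Y: 0, W: tile, H: tile}
			dst := sdl.Rect{X: int32(tx) * tile, Y: int32(ty) * tile, W: tile, H: tile}
			renderer.Copy(tileset, &src, &dst)
		}
	}
}

// Collision becomes a cheap map lookup instead of a pixel read:
func eatenAt(x, y int32) bool {
	return fieldMap[y/tile][x/tile] == 0
}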
