How to preset a streaming texture with the currently shown pixels so that further rendering operations are merged with the existing content

I want to combine SDL’s hardware rendering with some special software rendering. For this I use a streaming texture for the software rendering part.

Rendering only with hardware or only with software is easy, but what is the best way to combine the two approaches?

The problem is that the software renderer needs to see the current pixels, but the pixels of a streaming texture are not preset. And the locked pixels are documented as write-only, so I don’t have any idea how to preset them.

Furthermore, I didn’t see any way to specify a blend operation like OVER, or something similar, to be applied when the texture gets unlocked.

It looks like a streaming texture always gets blitted to the screen by completely replacing the prior content.

Keep the pixel data in an SDL_Surface, or just in a custom buffer that you allocate yourself. During software rendering, update the state of this surface/buffer, and when you want to send the data to the GPU, copy the data from the surface/buffer to the texture.

Something like SDL_SetRenderDrawBlendMode? Also, you can compose a custom blend mode; see the SDL_ComposeCustomBlendMode function.

Which pixel data do you mean? I mean the window’s pixel data, which the HW renderer can manipulate without the software renderer seeing it.

The docs state that this applies to rendering (line/rect) but don’t say anything about BlitSurface operations.

So how do you do this software rendering? What do you use?

It’s currently not implemented/wrapped as an SDL_Renderer but rather uses the pixel buffers directly. It works so far, as long as either SDL or B2D alone draws the picture.

The next step is to use both: let SDL draw a blue rectangle and B2D a red frame on top of it, so that the blue SDL pixels remain visible below and in the middle of the frame.

If speed isn’t a concern, you can read the pixels from the renderer using SDL_RenderReadPixels().

See SDL_SetSurfaceBlendMode().

I don’t think that reading single pixels makes a lot of sense, and yes, speed is relevant.

I’ll experiment with the blend mode stuff and see if we can create a simple compositor.

It doesn’t read single pixels: it’s SDL_RenderReadPixels(), plural, not SDL_RenderReadPixel()! It’s best avoided because it can be relatively slow, but sometimes there’s no alternative, and you did ask how to get at the existing pixels.