Future 3D API...

That’s great news, but also a big undertaking.

Meanwhile, I am glad that the 2D API remains, and I am wondering whether it is still open to receiving a few minor feature updates. I am interested in “signed” texture blending support in particular. I am asking because it is vastly underestimated what real-time visual effects can be achieved just by smart use of blending, and I like to push those limits while the code remains cross-platform.

For example, signed-valued textures would enable me to implement normal-map lighting much more efficiently (since I wouldn’t have to separate negative and positive vector components into separate textures to handle them differently).
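
To illustrate, here is a minimal sketch of that workaround (the helper functions are made up for this example, and custom blend modes are not guaranteed to be supported on every renderer backend):

```c
/* Sketch of the workaround: split signed normal-map components into a
 * "positive" and a "negative" unsigned texture on the CPU, then add the
 * positive part and subtract the negative part during rendering. */
#include <SDL.h>

/* Hypothetical helper: encoded is 0..255 with 128 representing zero (typical
 * normal-map storage); any rescaling to the full range is omitted here. */
static void split_signed_byte(Uint8 encoded, Uint8 *pos, Uint8 *neg)
{
    *pos = (Uint8)(encoded > 128 ? encoded - 128 : 0);  /* positive magnitude */
    *neg = (Uint8)(encoded < 128 ? 128 - encoded : 0);  /* negative magnitude */
}

/* Hypothetical render pass: two copies instead of one signed blend. */
static void blend_signed(SDL_Renderer *r, SDL_Texture *pos_tex,
                         SDL_Texture *neg_tex, const SDL_Rect *dst)
{
    /* dst = dst - src, alpha left untouched; may be unsupported on some backends */
    SDL_BlendMode subtract = SDL_ComposeCustomBlendMode(
        SDL_BLENDFACTOR_ONE, SDL_BLENDFACTOR_ONE, SDL_BLENDOPERATION_REV_SUBTRACT,
        SDL_BLENDFACTOR_ZERO, SDL_BLENDFACTOR_ONE, SDL_BLENDOPERATION_ADD);

    SDL_SetTextureBlendMode(pos_tex, SDL_BLENDMODE_ADD);
    SDL_RenderCopy(r, pos_tex, NULL, dst);      /* dst += positive part */

    SDL_SetTextureBlendMode(neg_tex, subtract);
    SDL_RenderCopy(r, neg_tex, NULL, dst);      /* dst -= negative part */
}
```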

It would be nice to hear an answer on that. Thanks! :)

My expectation is this will be a few months of serious work to get to something presentable, so sometime in 2022?

The megagrant paperwork is still being processed, so beyond sketching out basic API design, this hasn’t started yet…but hopefully soon!

I haven’t thought about this at all, and what it would take to add it to various backends, so I can’t say about this feature specifically at the moment.

While the 2D API won’t go away, I don’t expect to add more major features to it. Simple additions are 100% still possible, though!

Thanks for considering! Just the small addition of signed texture blending would expand the possibilities of the 2D renderer dramatically. It would be possible to make a 2D game look advanced/modern without the use of shaders (since many neat effects can be simulated with blending and some precomputed data), all of it running on an integrated GPU.

By the way, signed texture blending just for SDL_BLENDMODE_MOD and SDL_BLENDMODE_ADD would totally suffice.

Just one last thing while I’m at it: there is one feature possibly even more valuable than signed texture blending, and that is “swizzled” texture blending. Just like with SDL_SetTextureColorMod(), we could have an “SDL_SetTextureSwizzleMod()” where you can swap the channel order before blending (like rgb, brg, gbr). That would make it possible to compute the sum (r + g + b) of a texel without cumbersome workarounds (simply blend the texture three times with different swizzle modes), and it would be a good step closer to general-purpose computation with blending. Want to multiply a texture by a 3x3 matrix? No problem. Want to do normal-map lighting? No problem. ;)

The current workaround for all that is to actually store a texture in 3 “formats” (rgb, brg, gbr), do the same computations on them three times (instead of just once), and then add the three results together.
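
A minimal sketch of that workaround, with made-up helper names (the pre-swizzled copies are built on the CPU, then blended additively so every channel of the target ends up holding the saturated sum r + g + b of the source texel):

```c
#include <SDL.h>

/* Hypothetical helper: create an RGBA copy of `src` with its color channels
 * rotated, e.g. rotate = 1 turns rgb into gbr, rotate = 2 turns rgb into brg. */
static SDL_Texture *make_rotated_copy(SDL_Renderer *r, SDL_Surface *src, int rotate)
{
    SDL_Surface *conv = SDL_ConvertSurfaceFormat(src, SDL_PIXELFORMAT_RGBA32, 0);
    Uint8 *p = (Uint8 *)conv->pixels;
    for (int i = 0; i < conv->w * conv->h; ++i, p += 4) {
        Uint8 c[3] = { p[0], p[1], p[2] };
        p[0] = c[(0 + rotate) % 3];
        p[1] = c[(1 + rotate) % 3];
        p[2] = c[(2 + rotate) % 3];
    }
    SDL_Texture *tex = SDL_CreateTextureFromSurface(r, conv);
    SDL_FreeSurface(conv);
    return tex;
}

/* Blend the three rotated copies additively onto the render target:
 * each channel becomes r + g + b (clamped to 255 by SDL_BLENDMODE_ADD). */
static void sum_channels(SDL_Renderer *r, SDL_Texture *copies[3], const SDL_Rect *dst)
{
    for (int i = 0; i < 3; ++i) {
        SDL_SetTextureBlendMode(copies[i], SDL_BLENDMODE_ADD);
        SDL_RenderCopy(r, copies[i], NULL, dst);
    }
}
```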

Now, I don’t know whether you are already using shaders behind the scenes to do some of the blending. In that case, adding swizzled texture blending should be straightforward. Otherwise, if the blending stage is done by the graphics API’s fixed-function blending on every backend, then I can see how it could complicate things, as you would need to fire up a shader just to do the swizzling.

Will there be more graphical functions similar to SFML (VAOs, VBOs, drawing primitives, cameras…), or will all of that backend work still be up to the user?

Please please please fight to keep it simple. There will be tools that are more efficient, prettier, more accepted, but nothing will be simpler than SDL if this is done right. icculus once said that SDL is a hyped-up Super Nintendo, which is actually how I’ve explained it to people before, because everything is so rectangular. I love what it is and I don’t want to see it go.

I’m actually building a 3D game with SDL right now, hehe. I would like to see what happens with this 3D API stuff.

I’m excited about this new API; just having multi-platform support for shaders is an incredible thing! I’d like to know if you already have an idea of a possible release date, maybe at the start/middle of 2023? I noticed that there is still work to do on the shader tooling, and I know these things take time, so maybe not that soon? Anyway, thanks so much for your work!

I would prefer some code examples showing how the new functions would work…

I put together a really quick FAQ and link dump document over here, which has a code example or two mentioned:

I’m currently building out the Intermediate Representation portion of the shader compiler (which, since it more or less flattens down into the bytecode format, can serve as the last stage of the compiler for now). After that I have to implement shader loading in the GPU API, which is me downplaying a lot of work still to be done, but we’re getting surprisingly close to a first-cut proof of concept.

Once that’s done, the next big step is implementing the other backends (currently I’ve only implemented Metal for the proof of concept), filling in gaps we know we skipped over, like missing texture formats, etc., and adding a basic optimization pass to the compiler (nothing super-serious, but there are some basic-but-very-good optimizations, like constant folding and dead-code elimination, that can be implemented very easily without writing LLVM from scratch, so it’s silly not to implement them).
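
Just to illustrate what I mean by those passes (a toy sketch, nothing like the actual compiler code), constant folding over a flat IR is basically:

```c
#include <stddef.h>

/* Toy IR: each instruction is either a literal constant or a binary op
 * referencing two earlier instructions by index. */
typedef enum { OP_CONST, OP_ADD, OP_MUL } Op;

typedef struct {
    Op op;
    float value;   /* used when op == OP_CONST */
    int lhs, rhs;  /* operand instruction indices otherwise */
} Inst;

static void fold_constants(Inst *ir, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        if (ir[i].op == OP_CONST) continue;
        Inst *a = &ir[ir[i].lhs], *b = &ir[ir[i].rhs];
        if (a->op == OP_CONST && b->op == OP_CONST) {
            float v = (ir[i].op == OP_ADD) ? a->value + b->value
                                           : a->value * b->value;
            ir[i].op = OP_CONST;   /* fold: this instruction becomes a literal; */
            ir[i].value = v;       /* a later dead-code pass can drop a and b
                                      if nothing else references them */
        }
    }
}
```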

Right now this is getting implemented in my spare time, which isn’t a lot at the moment, and as compiler development goes, some days I look at the next thing I need to write and take a nap instead; you know how it goes. :)

But I am making forward progress, and the finish line for something usable is coming up. It might be before 2023!

I am definitely looking forward to this.

I see the last post was from last year. Have there been any updates on this new API? I have a simple 3D API for my project that is currently built on SDL_RenderGeometry() and a lot of matrix math. This API is exactly what I have been wanting for the last 10 years.

Before anyone brings up OpenGL or Vulkan, I already know. I did consider using them directly, but it would involve rewriting the over 20,000 lines of code that use the 2D rendering API. The other reason I use the 2D rendering API is that it gave me a cross-platform rendering solution out of the box, with caveats (SDL_RenderGeometry does not seem to work with Emscripten).
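
For context, the approach is roughly this kind of thing (a minimal sketch rather than my actual code; the projection constants here are made up): project camera-space points on the CPU and hand the resulting triangles to SDL_RenderGeometry().

```c
#include <SDL.h>

typedef struct { float x, y, z; } Vec3;

/* Simple perspective projection: divide by depth and map to pixel coordinates.
 * Assumes p.z > 0 (in front of the camera); `focal` stands in for a full
 * projection matrix, and cx/cy are the screen center. */
static SDL_FPoint project(Vec3 p, float focal, float cx, float cy)
{
    SDL_FPoint s = { cx + focal * p.x / p.z, cy - focal * p.y / p.z };
    return s;
}

static void draw_triangle(SDL_Renderer *r, SDL_Texture *tex,
                          const Vec3 v[3], const SDL_FPoint uv[3])
{
    SDL_Vertex verts[3];
    for (int i = 0; i < 3; ++i) {
        verts[i].position  = project(v[i], 400.0f, 320.0f, 240.0f);
        verts[i].tex_coord = uv[i];
        verts[i].color     = (SDL_Color){ 255, 255, 255, 255 };
    }
    SDL_RenderGeometry(r, tex, verts, 3, NULL, 0);  /* no index buffer */
}
```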

I also have some questions.

  1. Will SDL_Textures be usable with this API or will it have its own texture format?

  2. How much of the 2D rendering API will be usable with the new 3D one? For example, will SDL_SetRenderTarget() still be usable when rendering to a texture mapped on a 3D surface?

  3. Will there be a way to switch between a 2D and 3D coordinate system? For example, if I render a 3D scene could I still use 2D coordinates for the HUD and UI elements?

  1. It will have its own texture formats.
  2. It depends. There’s a plan to eventually give the 2D renderer a backend based on the 3D API, but how much you’ll be able to mix and match the two remains to be seen. If you’re doing 3D stuff it’s only one extra step to also do your own 2D stuff.
  3. Yes. You’re providing the vertex descriptors and shaders, doing your own matrix math, etc., so the geometry you’re sending can be anything. 2D coordinates, 3D coordinates, whatever.

The new GPU API will be more akin to Vulkan or Metal. You’ll have to handle your own matrix math, vertex descriptors, shaders, etc.
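
As a rough illustration of the kind of matrix math you’d be handling yourself (just a sketch in plain C, not part of any SDL API), building your own projection matrix would be on you:

```c
#include <math.h>

/* Right-handed perspective projection with depth mapped to 0..1 (Metal/D3D
 * convention). fovy is in radians; the result is column-major, so column i
 * starts at m[4*i]. */
static void perspective(float m[16], float fovy, float aspect,
                        float znear, float zfar)
{
    float f = 1.0f / tanf(fovy * 0.5f);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = zfar / (znear - zfar);
    m[11] = -1.0f;
    m[14] = (znear * zfar) / (znear - zfar);
}
```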

What isn’t working? It seems to work OK here: you can try this link to run a game rendered using SDL_RenderGeometry() (it takes a little while to load).

Thanks. I think I was misunderstanding what the API was going to be. I thought it was going to be something similar to the 2D API in that it provides high-level functionality for those who just want to implement simple 3D functionality, similar to raylib.

This is still a great thing in itself, though, since it will save me time implementing simple 3D functionality without having to worry about platform-specific coding.

Did you build SDL2 from source or did you use the one that is installed when you use the “-s USE_SDL=2” flag?

Also, it’s cool to see another BASIC developer. I am asking about all this for my BASIC dialect RCBasic, which uses the SDL2 2D renderer for everything.

I use the -s USE_SDL=2 flag. The only SDL module I’m considering compiling from source is SDL2_ttf; at the moment I’m using -s USE_SDL_TTF=2, but the version that gives me is too old to contain HarfBuzz.

I use a mixture of SDL_Renderer for 2D and direct calls to OpenGL for 3D and shader graphics. Fortunately SDL2 supports this mixed environment and it serves me very well, while OpenGL is available on all the platforms I support (Windows, MacOS, Linux, Android, iOS and Emscripten).
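
The pattern is roughly the following (a minimal sketch, assuming a renderer backed by the “opengl” driver and SDL 2.0.10 or newer for SDL_RenderFlush()):

```c
#include <SDL.h>
#include <SDL_opengl.h>  /* GL declarations for the direct GL drawing below */

static void render_frame(SDL_Renderer *renderer)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);

    /* ... 2D drawing with SDL_Render* calls ... */

    SDL_RenderFlush(renderer);  /* submit any batched 2D draws before raw GL */

    /* ... direct OpenGL 3D / shader drawing goes here; restore whatever GL
       state you changed (shader program, blend state, bound buffers) before
       handing control back to the SDL renderer ... */

    SDL_RenderPresent(renderer);
}
```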

If and when any of those platforms drops OpenGL altogether (the first will probably be macOS), the cross-platform nature of my app will fall apart! At that point it would have to move to using the new SDL 3D/shader API, if it’s ready, but in practice I’ll be too old for that to affect me personally.

I agree that SDL not getting a simplistic high-level 3D API is a missed opportunity. The current 2D one can already be used for 3D, albeit with inefficiencies (see Am I using SDL_RenderGeometry correctly? - #3 by Sylvain_Becker).
So better 3D support would not have meant bloating SDL3.

As an aside, could you share your 3D API built with SDL_RenderGeometry()?

Currently half of the API is built in C++ and the other half is built with the BASIC programming language I develop, so it’s not really useful to you. If you want to check it out anyway, you can find it here: https://github.com/n00b87/Calamity3D

Note: In RCBasic, the SDL_RenderGeometry() function is called DrawGeometry().