Future 3D API...

OK, so I won’t go into object loaders, or the fact that you didn’t even mention FBX and the Blender format, probably the two most relevant object file formats in all of gamedev. Let’s just talk about rendering.

You mentioned custom shaders. If you want to render 3D geometry with shaders, people are going to want a way to use the same basic shader for multiple models, but with different parameters. So you need a concept of Materials.

Once people start rendering scenes composed of more than one 3D model, lighting becomes crucial. So now you need an API for Lights. Then you need ways to determine how to deal with multiple Lights shining on the same texel. So you need a Forward or a Deferred rendering algorithm, or preferably both and an API to choose between them.

Once people start lighting scenes, they’re going to discover that rendering even halfway decent lighting is very computationally expensive, and the bulk of that computation is redundant because it doesn’t change from frame to frame. So you need a concept of baking and applying lighting maps.

Once people have scenes, they’re going to want to not just place things in the scene but also move them around. They don’t want solid objects clipping through each other, so you need a concept of a 3D collision geometry simulator. (Even without a full physics engine, which I agree is beyond the scope of SDL, you will need a collision system.)

Once people start moving things around, they’re going to want them to move in a lifelike manner. So you need a concept of Animations, and a way to transition smoothly between Animations.

Once people move objects around, they’ll want to connect objects so they will move together, such as a person picking up a weapon and holding it in their hand so that it will move together with the hand in a visually correct way. So you need a hierarchical Scene Graph API.

And so on. If you do not have APIs for these basic things, no one will want to use SDL for even simple 3D rendering when engines such as Unity and Unreal provide them all, for free, out of the box. As I said above, this is table stakes for 3D. Like giving a mouse a cookie or boiling up a pot of stone soup, once you take the first step there are tons of other things that will inevitably be pulled into even the Minimum Viable Product, and any product that doesn’t end up containing them will not be viable.

You’re describing a game engine, not a rendering API.


Hello, I’m a newbie testing out SDL2.

May I request a YouTube video of the future of SDL2?

It’s easier to watch a video of what you’re planning to do.

Well yeah. The biggest use case for SDL, by far, is to build games. If people want to build a 3D game, and they see a “rendering engine” that doesn’t have these features, what are they more likely to do?

  1. Build out all those features themselves.
  2. Pass it by and use a different framework that does provide them.

People may claim they don’t like “bloat,” but they always want more features. “Bloat” just means “features I personally don’t need,” but the table stakes stuff is features everyone will need.

The simple fact of the matter is, in the world of programming, Zawinski’s Law reigns supreme: software which can’t provide the features people want is replaced with software that can.

This is either a bad-faith argument or a wild misunderstanding of SDL’s role in the software stack, and I’m not interested in having either conversation with you in this thread.


@icculus you should look at what dawn and webgpu-rs are doing
( dawn - Git at Google ) ( GitHub - gfx-rs/wgpu: Safe and portable GPU abstraction in Rust, implementing WebGPU API. )

Basically, they are providing a WebGPU interface and putting the backends (native and web) behind the scenes. Thus, in the end, you only have to target one “render API” and it’ll use DX12, Metal, Vulkan, GL, WebGPU, etc. on the backend. But yes, since the standard is nowhere near finalized, it’s not quite as useful yet. Still, something to watch.


Not me! I’d like to see some 3D capabilities added to SDL; currently I use direct calls to OpenGL for this, which works but it would be easier and potentially more portable to do it all through the SDL API. Ryan’s current proposals are pretty much exactly what I want.


Same, the addition of shaders alone opens so many doors for 2D games. I’ll finally be able to do palette swapping on the cheap! Things like greyscale or dithering become trivial too.

Even if we needed/wanted those other things, you’ve gotta start somewhere, and this is it.


Agreed, this is a pretty exciting development in SDL.

I suppose my question is: would you expect this to be available in 2022, or are we talking further out? It feels like quite an undertaking, particularly distilling it down to the minimum.

That’s great news, but also a big undertaking.

Meanwhile, I am glad that the 2D API remains, and I’m wondering if it will still receive a few minor feature updates. I am interested in “signed” texture blending support in particular. I am asking because the range of real-time visual effects that can be achieved just by smart use of blending is vastly underestimated. And I like to push those limits while the code remains cross-platform.

For example, signed-valued textures would enable me to implement normal-map lighting much more efficiently (since I wouldn’t have to split the negative and positive vector components into separate textures to handle them differently).

Would be nice to hear an answer on that. Thanks! :slight_smile:

My expectation is this will be a few months of serious work to get to something presentable, so sometime in 2022?

The megagrant paperwork is still being processed, so beyond sketching out basic API design, this hasn’t started yet…but hopefully soon!


I haven’t thought about this at all, and what it would take to add it to various backends, so I can’t say about this feature specifically at the moment.

But while the 2D API won’t go away, I don’t expect to add more major features to it. But simple additions are 100% still possible!

Thanks for considering! Just the small addition of signed texture blending would dramatically expand the possibilities of the 2D renderer. It would be possible to make a 2D game look advanced/modern without the use of shaders (since many neat effects can be simulated with blending and some precomputed data), all while running on an integrated GPU.

Btw. signed texture blending just for SDL_BLENDMODE_MOD and SDL_BLENDMODE_ADD would totally suffice.

Just one last thing while I’m at it. There is one thing possibly even more valuable than signed texture blending: “swizzled” texture blending. Just like SDL_SetTextureColorMod(), we could have an “SDL_SetTextureSwizzleMod()” where you can swap the channel order before blending (like rgb, brg, gbr). That would make it possible to compute the sum (r + g + b) of a texel without cumbersome workarounds (simply blend the texture three times with different swizzle modes). And we’d be a good step closer to general-purpose computation with blending. Want to multiply a texture by a 3x3 matrix? No problem. Want to do normal-map lighting? No problem. :wink:

The current workaround for all that is to store the texture in 3 channel orders (rgb, brg, gbr), do the same computation on all 3 copies (instead of just once), and then add the three results together.

Now, I don’t know if you are already using shaders behind the scenes to do some of the blending. In that case, adding swizzled texture blending should be straightforward. Otherwise, if the blending stage is handled by the graphics API on all backends, then I can see how it could complicate things, as you would need to fire up a shader just to do the swizzling.

Will there be more graphical functions similar to SFML (VAOs, VBOs, drawing primitives, cameras…), or will all of that backend work still be up to the user?


Please please please fight to keep it simple. There will be tools that are more efficient, prettier, more accepted, but nothing will be simpler than SDL if this is done right. icculus once said that SDL is a hyped-up Super Nintendo, which is actually how I’ve explained it to people before, because everything is so rectangular. I love what it is and I don’t want to see that go.

I’m actually building a 3d game with SDL now hehe. I would like to see what happens with this 3d API stuff.


I’m excited about this new API, just having multi-platform support for shaders is an incredible thing! I’d like to know if you already have an idea of a possible release, maybe at the start/middle of 2023? I noticed that there is still work in the shaders tool and I know that these things take time so maybe not that soon? Anyway, thanks so much for your work!


I would prefer some code examples showing how the new functions would work…

I put together a really quick FAQ and link dump document over here, which has a code example or two mentioned:


I’m currently building out the Intermediate Representation portion of the shader compiler (which, since it more or less flattens down into the bytecode format, can serve as the last stage of the compiler for now). Then I have to implement shader loading in the GPU API, which is me downplaying a lot of work still to be done, but we’re getting surprisingly close to a first-cut proof of concept.

Once that’s done, the next big steps are implementing the other backends (currently I’ve only implemented Metal for the proof of concept), filling in gaps we know we skipped over (like missing texture formats), and adding a basic optimization pass to the compiler. Nothing super-serious, but there are some basic-but-very-good optimizations, like constant folding and dead-code elimination, that can be implemented easily without writing LLVM from scratch, so it would be silly not to do them.

Right now this is getting implemented in my spare time, which isn’t a lot of time right now, and as compiler development goes, some days I look at the next thing I need to write and take a nap instead, you know how it goes. :slight_smile:

But I am making forward progress, and the finish line for something usable is coming up. It might be before 2023!