So this is a long way off (design hasn't even started yet), but I've got an Epic MegaGrant to add a 3D rendering API to SDL.
The long-winded high-level whats and whys of it are explained here.
I’ll be giving more concrete details later, but the basic plan is:
The basic SDL API looks like other next-gen APIs (Metal, Direct3D 12, Vulkan)…command queues and precooked state objects, not immediate state machines like OpenGL and older Direct3D. This makes the C-callable API pretty compact, because all the power is going to be in…
…shaders. We don’t know what this will look like yet, but the intention is for you to use the same bytecode shader on everything and let SDL figure out how to make it work with the GPU. I’m trying to decide if there should be some ability to use shader source code at runtime, too, but this may or may not happen.
There will be one coordinate system and SDL will manage the details for various backends if things need to be flipped, etc.
This will work like all the other SDL subsystems: it will pick the best way to get this done based on what it finds on the end-user’s system. If they are on an iPhone or Mac, that’ll likely be Metal, on Windows 11 it will be Direct3D 12, Vulkan on Linux, etc.
Ideally we will be throwing away most (but not all) of the 2D rendering API backends, and replacing them with one that talks to this new API behind the scenes. The existing 2D API is not being deprecated, because it’s still extremely useful for certain types of games, and getting older games off the CPU and onto the GPU, etc. It’s likely there will be hooks into the new API, like how you can ask the 2D renderer for a GL texture or a Metal command buffer, in case you mostly want simple 2D but want to add on top of that (and as a benefit, you will no longer have to write to a specific low-level API to make this work).
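To give a feel for the "command queues and precooked state objects" idea, here's a purely hypothetical sketch. Design hasn't started, so every SDL_Gpu* name below is invented for illustration and nothing here is real SDL API:

```c
/* Hypothetical sketch of a command-queue style API with precooked state
 * objects. None of these SDL_Gpu* names exist; they are invented purely
 * to illustrate the design philosophy described above. */
SDL_GpuDevice *gpu = SDL_GpuCreateDevice(window);

/* State is "precooked" into an immutable pipeline object up front,
 * instead of being mutated one call at a time as in OpenGL. */
SDL_GpuPipelineDescription desc;
SDL_zero(desc);
desc.vertex_shader = vertex_bytecode;     /* portable shader bytecode */
desc.fragment_shader = fragment_bytecode;
SDL_GpuPipeline *pipeline = SDL_GpuCreatePipeline(gpu, &desc);

/* Each frame: record commands into a buffer, then submit the whole batch. */
SDL_GpuCommandBuffer *cmd = SDL_GpuAcquireCommandBuffer(gpu);
SDL_GpuSetPipeline(cmd, pipeline);
SDL_GpuDraw(cmd, vertex_buffer, 0, num_vertices);
SDL_GpuSubmit(cmd);
```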
Here are some things that are not part of the plan:
A software renderer. It’s just not practical with shaders.
Older 3D API backends. If we are lucky, we might get support for very new OpenGL, but the idea is this is going to map to the new philosophies and features of next-gen APIs, and bending over backwards to make (say) OpenGL 1.1 work is just not likely to happen. Which is to say, you should expect this API to fail to init if you have an egregiously old system, or try to run it over Windows Remote Desktop or whatever.
Letting people mix and mingle between the new API and the underlying native API. The primary need for this in the 2D API was to use OpenGL magic directly on top of other 2D rendering, but we’re hoping those limitations won’t exist to need bypassing with the new API. Letting people hook in to OpenGL directly caused a lot of confusion, complexity, and special-case API additions to support it. Not to mention you were in trouble if the renderer API one day started supporting Vulkan instead.
There are parts of the next gen things we aren’t adding right now because it feels like biting off more than we can chew: compute shaders, ray tracing, etc. These things might be added later (AND ALSO MIGHT NOT), but for a first cut, I believe this will be a quantum leap forward in the rendering functionality we currently offer, more than enough for both small indie games and all but perhaps the most bleeding edge triple-A titles.
An optimizing shader compiler. I do not intend to turn SDL into a compiler project or import a dependency on LLVM. I expect to lose some performance to this, but not a significant amount. It's not impossible that an external project offering an offline, optimizing compiler could arrive at some point.
You might have questions, and I think it's important to say that in these early times the answers will change as we find out parts of this plan are a bad idea and I pivot on various details. But I can offer my current best-laid plans in answer to any questions.
Wow, what a nice surprise! But why not WebGPU, and what about multi-threading? I'd guess the reason is OpenGL support (and since both can't be supported well at once, we need a new API).
The WebGPU standard is not finalized yet, but I assume eventually we will have a WebGPU backend for the new SDL API that the Emscripten port can use.
and what about multi-threading?
Multithreading is considered a crucial element of the next-gen APIs, so we will support it as well. We might have a small requirement like “the final framebuffer present has to happen from the main thread, but command buffers may be encoded and submitted from any thread,” to make sure we’re safe if a platform requires that. I haven’t decided on the exact plan yet.
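To illustrate that possible rule, with the same caveat as before (all the SDL_Gpu* names below are invented; only the thread functions are real SDL API), it might look something like this:

```c
/* Sketch of the threading rule above: encode and submit command buffers
 * on any thread, present on the main thread. SDL_Gpu* names are
 * hypothetical; SDL_CreateThread/SDL_WaitThread are real SDL API. */
static int encode_commands(void *data)
{
    SDL_GpuDevice *gpu = (SDL_GpuDevice *) data;
    SDL_GpuCommandBuffer *cmd = SDL_GpuAcquireCommandBuffer(gpu);
    /* ... record draw calls into cmd here ... */
    SDL_GpuSubmit(cmd);   /* submitting from a worker thread is fine */
    return 0;
}

void render_frame(SDL_GpuDevice *gpu)
{
    SDL_Thread *worker = SDL_CreateThread(encode_commands, "encoder", gpu);
    SDL_WaitThread(worker, NULL);
    SDL_GpuPresent(gpu);  /* ...but the final present stays on the main thread */
}
```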
One could theoretically do this, but it wouldn’t be hard to render textured quads directly and avoid the extra complexity.
I think it would be more desirable to do a little 3D in a mostly 2D game (think about the rotating mountain menu in the otherwise 2D game Celeste, or maybe adding some cool post-processing effect to any 2D game).
This is very exciting to read about, I’m looking forward to seeing what gets made!
Do you know yet what these changes might look like to a casual user of SDL? e.g. would this maybe bump the version to SDL3, or would the new API perhaps be a new subsystem that needs Initing?
Yes, but please consider bumping the middle number (the 'minor version'), because so far every release of SDL2 has apparently been just a 'patch' (which implies nothing was added, either). I've never understood this misuse of the semantic versioning style.
We should probably document the threading rules for existing backends while we’re at it. When asked if XYZ render or video backend supports rendering from another thread, the answer tends to be “if it works when you do that, then yes”. Sometimes it depends on both video and renderer backends together.
We had some code in the Wayland backend (handling the surface suspension stuff) that assumed that nobody else would be reading from the display connection at the same time. That turned into random deadlocks in 2.0.16. Rendering from another thread worked before, so somebody decided to take a dependency on it.
The renderer part of that is documented in the SDL_Render header file – SDL_Render can only be used from the main thread. (Whatever happens if you break that rule is implementation/platform-dependent.)
SDL’s windowing APIs are pretty much the same thing. For full cross-platform compatibility you can only use them on the main thread. What happens when you break that promise depends on the platform. On macOS and iOS, the OS will prevent you from making calls off the main thread.
I don’t think we can wind back the clock on the existing apps out there, but we can at least document the current state of things to make sure we don’t step on any landmines with similar stuff in the future.
I would question whether the contents of a header file qualify as 'documentation'. Everything of importance should be in, or at least linked from, the wiki.
Just adding some basic ideas that I gathered and thought about a while ago, and that would still fit with the 2D backends:
Provide a shader manager in the generic layer (e.g. SDL_render.c), to:
relieve the back-ends of caching/storing/compiling(?) their internal shaders.
allow the user to interact with it, for instance to use their own shaders.
and do something like:
SDL_PushUserShader(renderer, my_shader);           /* proposed, not real SDL API */
SDL_RenderCopy(renderer, texture, NULL, &dstrect);
SDL_PopUserShader(renderer);                       /* proposed, not real SDL API */
Won’t be supported by the SW renderer of course, but if it’s a no-op and the shader isn’t too fancy, a game could remain playable.
That could be extended with a function to add a uniform variable, e.g. SDL_RenderAddUniform().
Add a 3D version of SDL_RenderGeometry that would draw 3D vertices (see the sketch after this list).
Add an API to render .OBJ models, SDL_RenderObj(), to interface with whatever library or tool can provide 3D objects.
That would provide some 3D capabilities, but I'm not sure whether it makes sense technically with all back-ends.
Of course this may not be as good as real GL with all its possible customization, and I'm not sure whether it matches a real user need.
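As a sketch of the 3D-geometry idea above (these names are just my proposal, modeled on the existing SDL_Vertex/SDL_RenderGeometry pair, not real SDL API):

```c
/* Hypothetical 3D counterpart to SDL_Vertex / SDL_RenderGeometry.
 * Invented here to illustrate the proposal; not real SDL API. */
typedef struct SDL_Vertex3D
{
    float x, y, z;          /* position, now with a depth component */
    SDL_Color color;        /* vertex color (real SDL type) */
    SDL_FPoint tex_coord;   /* normalized texture coordinates (real SDL type) */
} SDL_Vertex3D;

int SDL_RenderGeometry3D(SDL_Renderer *renderer, SDL_Texture *texture,
                         const SDL_Vertex3D *vertices, int num_vertices,
                         const int *indices, int num_indices);
```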
All function documentation automatically bridges between the headers and wiki now; commits that change the doxygen will automatically land on the wiki and vice versa.
For example, a commit that changes the doxygen comments in a header automatically becomes the corresponding page edit on the wiki.
This doesn’t yet cover every part of the headers, but already the vast majority of them are bidirectionally bridged like this.
I think there are going to be a lot of decisions about what adds functionality and what adds usability.
I’m hesitant to add any 3D things to the existing 2D API if this works out, but I do like the idea of users being able to pick the level of complexity they can handle: 2D, 3D, or some helper functions on top of 3D to make it easier.
There’s a real risk in bloating the API though. We’re going to have to be careful.
“Risk” implies uncertainty. I would call it an inevitability: to even reach what’s considered “table stakes” levels of 3D functionality today, the amount of API surface you’re going to need to add will dwarf the entirety of what SDL is right now. If you don’t, you’ll end up with a 3D product no one will want to use.
I vote in favour of keeping the bloat down. This ethos is what makes SDL so great and sets it apart from other libraries / engines.
Speaking of which, when will the deprecated functionality of SDL2 be removed? A lot of new stuff has been added and the size is creeping up. Stuff like the addition of float coordinates makes the integer functions redundant. I hope SDL3 isn't going to remain a myth.