Cost of creating textures vs. holding onto them

Yes, within reason. Nothing is stopping you from having more than one atlas texture.

SDL’s primary use seems to be for games and emulators, and maybe video players, game-related tools, and things of that nature. People try to do other things with it, sure, but they face a significant uphill battle. If anything, I would classify your application as an emulator.

I specifically mentioned Qt and wxWidgets, which are cross-platform. Most users would expect a word processing app to have a level of text layout and rendering that would be very difficult to reimplement yourself from the ground up.

This is kind of a dumb derail.

It’s not, but trying to explain that (especially if you’re not familiar with the history) would be even more off-topic.

Now that SDL2_ttf has integrated the HarfBuzz text-shaping engine, I think it would be entirely practical to implement a respectable word processor.

The entire atlas is empty at the start, with an initial size that's suitable for a typical number of characters at that font size (but not big enough to use a significant amount of memory, except at extremely large font sizes).
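A minimal sketch of how that initial sizing heuristic might look (this is an assumed helper, not from SDL or any library; the slack factor and starting size are arbitrary choices):

```c
#include <stdint.h>

/* Hypothetical heuristic: pick an initial atlas edge length big enough for
 * roughly `expected_glyphs` glyphs of `glyph_px` pixels each, rounded up to
 * a power of two and clamped to the driver's maximum, so small font sizes
 * don't allocate a large texture up front. */
static uint32_t initial_atlas_size(uint32_t glyph_px, uint32_t expected_glyphs,
                                   uint32_t max_size)
{
    /* area needed, with 2x slack for packing waste */
    uint64_t area = (uint64_t)glyph_px * glyph_px * expected_glyphs * 2;
    uint32_t size = 128;
    while (size < max_size && (uint64_t)size * size < area)
        size *= 2;
    return size;
}
```

For a 16 px font with ~128 expected glyphs this picks a 256 × 256 atlas; a huge font on a 4096-capped device just starts at the cap.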

When the text rendering function iterates over the characters in a given input string, it checks whether each one is already in the atlas. If so, it simply uses the stored UV coordinates and moves on. If not, the character is rasterized using whatever API you have (FreeType, etc.), the rasterized result is copied into a free space in the texture atlas, and its information is stored.
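The lookup-or-insert step above can be sketched roughly like this. This is an assumed simplification: it uses a linear cache and a basic shelf (row) packer, where real code would likely use a hash map and call FreeType where the comment indicates; none of these names come from an actual library.

```c
#include <stdint.h>
#include <string.h>

typedef struct { uint32_t cp, x, y, w, h; } Glyph;

typedef struct {
    Glyph    cache[512];
    int      count;
    uint32_t size;                  /* atlas is size x size pixels */
    uint32_t pen_x, pen_y, row_h;   /* shelf-packer cursor */
} Atlas;

static const Glyph *atlas_find(const Atlas *a, uint32_t cp)
{
    for (int i = 0; i < a->count; i++)
        if (a->cache[i].cp == cp) return &a->cache[i];
    return NULL;
}

/* Returns the cached glyph, or NULL when the atlas is full
 * (the caller would then grow the atlas and retry). */
static const Glyph *atlas_get(Atlas *a, uint32_t cp, uint32_t w, uint32_t h)
{
    const Glyph *hit = atlas_find(a, cp);
    if (hit) return hit;                 /* already rasterized: reuse UVs */

    if (a->pen_x + w > a->size) {        /* current row full: start new one */
        a->pen_x = 0;
        a->pen_y += a->row_h;
        a->row_h = 0;
    }
    if (a->pen_y + h > a->size || a->count == 512)
        return NULL;                     /* no free space left */

    /* rasterization (e.g. FreeType) would happen here; the resulting
     * bitmap is copied into the atlas texture at (pen_x, pen_y) */
    Glyph g = { cp, a->pen_x, a->pen_y, w, h };
    a->cache[a->count++] = g;
    a->pen_x += w;
    if (h > a->row_h) a->row_h = h;
    return &a->cache[a->count - 1];
}
```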

If there’s no free space in the texture atlas, my system currently destroys the old texture, creates a new one at a larger size, and restarts the character iteration loop. If the maximum size supported by the system is reached, it falls back to using multiple textures. That is slower to render, because switching textures and issuing multiple draw calls is slow, but it’s also a very rare situation in practice. The typical maximum texture size on PC is 16384x16384.

Right now my system doesn’t use HarfBuzz or an equivalent (to support complex-script languages, etc.), but I’ve been researching that and I think the overall idea will still work fine.

Rasterizing a character and uploading it to the texture atlas doesn’t add significant overhead to the system, because it tends to happen only in the first frames. Even in the frames where it does happen, it’s more expensive than a plain cache hit, but not expensive enough to cause stutters in practice.

I’m operating in a cross-platform environment, and everything has to run on a mobile device (Android, iOS), on a Raspberry Pi, or in a browser (WebAssembly). Therefore the maximum texture size I can safely rely on is more like 4096 x 4096, which in area is 16 times smaller than yours!

To make things worse, on a mobile device I may be rendering to a ‘retina’ display at 300 dpi or so (it’s the only way to get really sharp text), so when you take the maximum texture size into account, the number of glyphs you can fit in an atlas may be quite small, particularly with larger font sizes.
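A rough back-of-the-envelope for that capacity concern, using a simple grid-packing estimate (an assumed helper that ignores per-glyph padding and variable widths):

```c
#include <stdint.h>

/* How many glyph cells of a given pixel size fit in a square atlas,
 * assuming a uniform grid (real packing varies with glyph widths). */
static uint32_t glyphs_per_atlas(uint32_t atlas_px, uint32_t glyph_px)
{
    uint32_t per_row = atlas_px / glyph_px;
    return per_row * per_row;
}
```

At 300 dpi a 12 pt glyph is roughly 12 x 300/72 = 50 px tall, so a 4096 atlas still holds on the order of (4096/50)^2 = 6561 cells, but a 144 pt heading at the same density (~600 px) leaves room for only (4096/600)^2 = 36 glyphs.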

The alternative approach of using a separate texture for each glyph works well for me. I know rendering is slower, but because I’m using a ‘target texture’ I don’t have to render the characters every frame. Indeed, in a word-processing application the target texture could correspond to the entire page (e.g. A4), and characters would only need to be rendered when initially entered (or on a re-formatting operation).

That’s totally fine; the system handles that case with no issues. The maximum texture size is determined at runtime rather than being hard-coded, and a 4096 x 4096 atlas can fit a great many glyphs even at high pixel densities. I rarely if ever see atlases that large unless the individual characters are massive on-screen, in which case there typically aren’t many characters anyway.

Memory use won’t be significantly higher than with free-floating individual glyph textures either.

I’m not trying to force you to use this technique or anything, just explaining it. The only real downside I know of is the complexity of its code (which has been very worth it for me, but YMMV of course). Or if you’re using an extremely computationally expensive glyph generation algorithm, like SDFs. But that’s another topic. :slight_smile: