I maintain a word processor (http://cowlark.com/wordgrinder), and I’ve gotten fed up with having to write multiple pieces of rendering code for X11, Windows, OSX, etc., so I want to use SDL instead.
Doing the driver was remarkably straightforward. I’m aware that SDL_ttf doesn’t cache glyphs so I’m using grimfang4’s SDL_fontcache layer instead. It actually works pretty well.
Unfortunately, while it works well on a desktop PC, on a low-end device such as a Raspberry Pi with Mali chipset it’s simply too slow — with a full-screen window it can’t keep up when scrolling. The old Xlib code could. So, I need to speed it up somehow. I’ve got a surprising number of Raspberry Pi users (and my preferred writing laptop is an ARM Mali device too).
The current rendering code is pretty naive: whenever the screen needs to be updated, I clear and redraw every character on the screen. I gather this is the preferred way to do it these days. (See dpy.c on the sdl branch of davidgiven/wordgrinder on GitHub.) For an average full-screen window that’s going to be, say, 150x50 characters = 7500 individual calls to SDL_RenderCopyEx() per frame. Is that a lot? I really can’t tell any more…
What can I do? Is it worth switching to something like SDL_gpu (or pure OpenGL, but I’d rather use something which abstracts away the difference between GL and GLES)? My impression was that the SDL renderer is supposed to be natively hardware accelerated, but I’ve seen references here that SDL_gpu is way faster. Is there any way to verify that hardware acceleration is actually being used?