I don’t know if anyone has seen this, but someone put together a simple
but effective benchmark comparing SDL2 and SFML, and SDL2 wins every
benchmark except one: dynamic text rendering.
The problem with dynamic text rendering is that SDL, at least with SDL_ttf,
renders to a new surface, which you then have to convert to a texture
before you can use it with the new rendering API.
Here is what the benchmark does, which AFAIK is the only correct way to do
it at the moment:
SDL_Surface* surfaceText = TTF_RenderText_Blended(font, oss.str().c_str(), color);
SDL_Texture* text = SDL_CreateTextureFromSurface(renderer, surfaceText);
// Define the text rectangle
SDL_Rect rect;
rect.x = 0;
rect.y = i * 30;
rect.w = surfaceText->w;
rect.h = surfaceText->h;
// Blit the text texture
SDL_RenderCopy(renderer, text, NULL, &rect);
// Release the per-frame intermediates
SDL_DestroyTexture(text);
SDL_FreeSurface(surfaceText);
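To make the per-frame cost explicit, here is the same pattern wrapped in a small helper (a sketch; the function name `draw_text` and its parameters are mine, not from the benchmark). The point is that every string change allocates a fresh surface, uploads a fresh texture, and then throws both away:

```c
#include <SDL.h>
#include <SDL_ttf.h>

/* Sketch of the current per-frame pattern: render to a surface,
 * upload it as a texture, draw it, then free both intermediates. */
static void draw_text(SDL_Renderer *renderer, TTF_Font *font,
                      const char *str, int x, int y, SDL_Color color)
{
    SDL_Surface *surfaceText = TTF_RenderText_Blended(font, str, color);
    if (!surfaceText) {
        return;
    }

    SDL_Texture *text = SDL_CreateTextureFromSurface(renderer, surfaceText);
    SDL_Rect rect = { x, y, surfaceText->w, surfaceText->h };

    if (text) {
        SDL_RenderCopy(renderer, text, NULL, &rect);
        SDL_DestroyTexture(text);  /* discarded every frame */
    }
    SDL_FreeSurface(surfaceText);  /* discarded every frame */
}
```

Doing this once per string per frame is exactly the allocation/upload churn that shows up in the benchmark.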
I’ve looked at SDL_ttf HG head, but there is not yet an API like:
int TTF_RenderText_Blended_Texture(TTF_Font *font, SDL_Texture *texture,
const char *str, SDL_Color color, SDL_Rect *rect);
I think it could be useful.
The idea is that the texture would act as a clip rect for the rendering:
the user has to supply a big enough texture, or they get only a partial
rendering (the return code could be used to detect whether the text hit
the limits of the texture).
If rect is not NULL, its width/height would be filled in with the
width/height of the text actually displayed.
I think this will work better if the texture used is a “streaming” texture.
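To illustrate what I have in mind, here is a rough sketch of how such a function could work against a streaming texture. None of this exists in SDL_ttf; the function name, the return convention, and the assumption that the texture is SDL_PIXELFORMAT_ARGB8888 (matching the 32-bit surfaces TTF_RenderText_Blended produces) are all mine. A real implementation would rasterize glyphs straight into the locked pixels instead of going through an intermediate surface:

```c
#include <SDL.h>
#include <SDL_ttf.h>
#include <string.h>

/* Hypothetical API sketch: render text into a caller-supplied streaming
 * texture (assumed ARGB8888), clipping to the texture's bounds.
 * Returns 0 on success, 1 if the text was clipped, -1 on error. */
int TTF_RenderText_Blended_Texture(TTF_Font *font, SDL_Texture *texture,
                                   const char *str, SDL_Color color,
                                   SDL_Rect *rect)
{
    SDL_Surface *surf = TTF_RenderText_Blended(font, str, color);
    if (!surf) {
        return -1;
    }

    int tw, th;
    SDL_QueryTexture(texture, NULL, NULL, &tw, &th);

    /* The texture acts as a clip rect: copy at most tw x th pixels. */
    int clipped = (surf->w > tw || surf->h > th);
    int w = (surf->w < tw) ? surf->w : tw;
    int h = (surf->h < th) ? surf->h : th;

    void *pixels;
    int pitch;
    if (SDL_LockTexture(texture, NULL, &pixels, &pitch) == 0) {
        for (int y = 0; y < h; ++y) {
            memcpy((Uint8 *)pixels + y * pitch,
                   (Uint8 *)surf->pixels + y * surf->pitch,
                   (size_t)w * 4);  /* 4 bytes per ARGB8888 pixel */
        }
        SDL_UnlockTexture(texture);
    }

    /* Report the size of the text actually drawn, if requested. */
    if (rect) {
        rect->x = 0;
        rect->y = 0;
        rect->w = w;
        rect->h = h;
    }

    SDL_FreeSurface(surf);
    return clipped ? 1 : 0;
}
```

With a streaming texture the caller allocates once and the text is updated in place each frame, avoiding the per-frame surface-to-texture upload entirely.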
If there is interest, and the API freeze applies only to SDL2 itself, I
can try to add this feature to SDL_ttf.