What is wrong with SDL_TTF?

Hello everybody!

I’m trying to use SDL_TTF to render a lot of text, but it’s extremely
slow. On a Sempron 2500+ (256 MB RAM, GeForce 4) it runs great. On a
Duron 800 (128 MB RAM, Riva 2) I get about 50 fps, which is acceptable. But
on a Pentium III 450 MHz (320 MB RAM) I get only 10 to 25 fps.

  1. Why is it so slow?
  2. Is it possible to render more than one line of text on one surface?
  3. Is there any portable alternative way to render text with SDL?

Thanks a lot.


  1. Why is it so slow?

TTF is a vector graphics format, so a bitmap representation must be
created on the fly. This consumes CPU time. SDL_TTF does some caching, but
there is always some CPU impact.

  2. Is it possible to render more than one line of text on one surface?

If the render functions of SDL_TTF don’t support line breaks, you could
easily add a wrapper function to support that, as well as alignment.
You’ll probably need a wrapper function anyway to add some appealing visual
effects, like gradients, to the fonts.
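A minimal sketch of such a wrapper might look like this. The pixel-width callback stands in for a call to TTF_SizeText, and all names here (wrapText, widthOf) are illustrative, not part of SDL_TTF:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Greedy word wrap: pack words into lines no wider than maxWidth, as
// measured by the caller-supplied widthOf callback. In a real program
// widthOf would wrap TTF_SizeText; it is kept abstract here so the
// sketch stays SDL-free.
std::vector<std::string> wrapText(const std::string &text, int maxWidth,
                                  int (*widthOf)(const std::string &))
{
    std::vector<std::string> lines;
    std::istringstream words(text);
    std::string word, line;
    while (words >> word) {
        std::string candidate = line.empty() ? word : line + " " + word;
        if (line.empty() || widthOf(candidate) <= maxWidth)
            line = candidate;          // word still fits on this line
        else {
            lines.push_back(line);     // flush the full line
            line = word;               // start a new one with this word
        }
    }
    if (!line.empty())
        lines.push_back(line);
    return lines;
}
```

Each returned line can then be rendered separately and blitted at increasing vertical offsets (e.g. the font's line skip).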

  3. Is there any portable alternative way to render text with SDL?

Yes, bitmap fonts. Someone just announced an SDL tutorial that has a
section dedicated to that:
http://lazyfooproductions.com/SDL_tutorials/lesson23/index.php

SuperTux also uses bitmap fonts (check latest stable version, svn is just too
confusing right now):
http://supertux.berlios.de/

The con of bitmap fonts is that if you want different sizes you have to
prepare them in advance (scaling won’t look nice), and there aren’t as many
available as TTFs, so you might need to draw them yourself. The pros are that
they’re faster and can look prettier (well, that depends on the artist ;)).

If you want to be able to draw the font in many different colors, just
make a monochrome bitmap and then, in the loading function, replace the black
(or whatever) pixels with the passed color.
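That pixel-replacement idea can be sketched as follows. A plain vector stands in for the surface's pixel buffer so the sketch stays SDL-free; in real code you would lock the surface and respect its pixel format:

```cpp
#include <cstdint>
#include <vector>

// Recolor a monochrome glyph bitmap: every "ink" pixel (black by
// default) is replaced by the requested color; background pixels are
// left untouched. The flat vector stands in for surface->pixels.
void recolorGlyph(std::vector<uint32_t> &pixels, uint32_t color,
                  uint32_t ink = 0x000000u)
{
    for (std::size_t i = 0; i < pixels.size(); ++i)
        if (pixels[i] == ink)
            pixels[i] = color;
}
```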

You can also use something like SDL_gfx’s gfxPrimitivesSetFont(). It uses
a representation of its own that draws lines and such for the
different characters. It will look quite pixelated, but it works nicely
in some games, such as a Space Invaders clone.

Cheers,
Ricardo

On Monday, 12 September 2005 15:46, kds71 wrote:

Thanks a lot.


The more we disagree, the more chance there is that at least one of us is
right.


  1. Why is it so slow?

Because it is doing some very fussy software rendering. That takes time,
it can take a lot of time.

The way to make it fast is to render the individual glyphs and store
them. Then draw text by blitting the individual glyphs to the screen. If
you are using OpenGL, store them as textures and then texture map
them into place. You can either render each glyph as its own surface, or
render the entire font into one surface and blit the appropriate parts
of your font surface.

The metrics you need to use to draw the text are all available through
SDL_TTF.

If you search the SDL mailing list archives for this subject together with
my email address, you will find a C++ class that I posted a while back
that implements this technique.
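The render-once-then-reuse idea can be sketched generically. The render callback stands in for TTF_RenderGlyph_Blended, and a std::string stands in for the resulting SDL_Surface*, so this sketch compiles without SDL; the class name is illustrative:

```cpp
#include <cstddef>
#include <map>
#include <string>

// Lazy glyph cache: each glyph is rendered at most once, on first use,
// and the stored result is reused on every later draw. The callback and
// the std::string "glyph" stand in for SDL_TTF and SDL_Surface*.
class GlyphCache {
public:
    explicit GlyphCache(std::string (*render)(char)) : render_(render) {}
    const std::string &get(char c) {
        std::map<char, std::string>::iterator it = cache_.find(c);
        if (it == cache_.end())   // first use: render and remember
            it = cache_.insert(std::make_pair(c, render_(c))).first;
        return it->second;
    }
    std::size_t size() const { return cache_.size(); }
private:
    std::string (*render_)(char);
    std::map<char, std::string> cache_;
};
```

After warm-up, drawing text is pure blitting; the TTF rasterizer is out of the per-frame loop entirely.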

  2. Is it possible to render more than one line of text on one surface?

Not that I know of. And, like I said, that wouldn’t be a good way to
solve the problem.

  3. Is there any portable alternative way to render text with SDL?

Sure, many of them. There are lots of SDL font libraries out there. You can
find one I wrote at http://gameprogrammer.com/fastevents/fastevents1.html
but, really, rendering the individual glyphs and blitting them into place
is great for most applications.

	Bob Pendleton

On Mon, 2005-09-12 at 16:46 +0200, kds71 wrote:

Thanks a lot.




SDL mailing list
SDL at libsdl.org
http://www.libsdl.org/mailman/listinfo/sdl



The way to make it fast is to render the individual glyphs and store
them. Then draw text by blitting the individual glyphs to the screen. If
you are using OpenGL, store them as textures and then texture map
them into place. You can either render each glyph as its own surface, or
render the entire font into one surface and blit the appropriate parts
of your font surface.

Now I need to render text in SDL, without OpenGL. But some time ago I
was writing an application in SDL and OpenGL (using SDL_TTF, too). I
quickly realised that I would need to render each glyph separately. So I
wrote a class, TFont, which loads and initializes a font, creates textures,
OpenGL display lists and so on. This class provides methods for rendering
text and querying metrics. The code was tested on the same machines:

  • on Sempron 2500+ (256 MB RAM, GeForce 4) - 60 fps
  • on Pentium II 450 MHz (320 MB RAM, GeForce 2) - 55-60 fps
  • on Duron 800 MHz (256 MB RAM, Riva 2) - 3-4 fps!!

So the results were terrible, too.

This is a digression, but I would be happy if someone could tell me why it
was so slow on the Duron (even when I render only two digits on screen - the
FPS counter - I get about 25 fps).

Back to the topic: thanks, I hope that will work. I think it will be
easy to adapt my TFont and TGlyph classes from the OpenGL code to SDL
surfaces :)

Regards



  1. Why is it so slow?
  2. Is it possible to render more than one line of text on one surface?
  3. Is there any portable alternative way to render text with SDL?


  1. It all depends on how you are rendering your text - especially
    whether you are calling TTF_RenderText etc. every frame for the same
    text. It’s much better to identify texts that don’t change too often and
    cache them. Memory allocation (which TTF_RenderText does)
    is very slow, and doing it every frame, for every possible line of text,
    will kill your framerate.

  2. AFAIK - no.

Koshmaar

[…]

Now I need to render text in SDL, without OpenGL. But some time
ago I was writing an application in SDL and OpenGL (using SDL_TTF,
too). I quickly realised that I would need to render each glyph
separately. So I wrote a class, TFont, which loads and initializes a font,
creates textures, OpenGL display lists and so on. This class provides
methods for rendering text and querying metrics. The code was tested on
the same machines:

  • on Sempron 2500+ (256 MB RAM, GeForce 4) - 60 fps
  • on Pentium II 450 MHz (320 MB RAM, GeForce 2) - 55-60 fps
  • on Duron 800 MHz (256 MB RAM, Riva 2) - 3-4 fps!!

Well, assuming the two faster machines are limiting the frame rate to
the 60 Hz refresh rate (retrace sync), there must be something very
wrong with the setup or something on the Duron.

Are you sure the driver actually provided hardware acceleration for
the pixel format you were using? Did other OpenGL software run with
decent frame rates on that system? Maybe your code relied on some
feature that wasn’t supported by the Riva 2 card… Some odd blending
mode, texture filtering or something that that chip cannot
accelerate?

Either way, with a design like that (prerendered glyphs), SDL_TTF
should be out of the loop (literally!) once you’ve rendered your
fonts. So, if it’s still slow, it should have nothing to do with TTF
rendering, unless you put some code in the wrong place…

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Monday, 12 September 2005 22:20, kds71 wrote:

Koshmaar wrote:

  1. It all depends on how you are rendering your text - especially
    whether you are calling TTF_RenderText etc. every frame for the same
    text. It’s much better to identify texts that don’t change too often and
    cache them. Memory allocation (which TTF_RenderText does)
    is very slow, and doing it every frame, for every possible line of text,
    will kill your framerate.

Well, I need to render the text every frame, because I use it in widgets
such as memos or lists - the text changes very often.

bloomp at rpi.edu wrote:
Yes, bitmap fonts.

I think we are talking about the same solution - the fonts I use are bitmap
fonts, indeed (I’m talking about my OpenGL code). The only difference is that
I don’t load the font from a bitmap; I create the bitmap from the TTF during
TFont object initialization. As I wrote in my previous e-mail, it works
terribly on the Duron 800 (256 MB RAM, Riva 2), but that was written in
OpenGL. Now I will do it without OpenGL and see the results.

Regards

Well, assuming the two faster machines are limiting the frame rate to
the 60 Hz refresh rate (retrace sync), there must be something very
wrong with the setup or something on the Duron.

It’s strange, but it was tested on two similar machines - both with a
Duron 800 - and both got 3-4 fps.

Are you sure the driver actually provided hardware acceleration for
the pixel format you were using? Did other OpenGL software run with
decent frame rates on that system? Maybe your code relied on some
feature that wasn’t supported by the Riva 2 card… Some odd blending
mode, texture filtering or something that that chip cannot
accelerate?

It is possible. I will check it and try other settings…

Either way, with a design like that (prerendered glyphs), SDL_TTF
should be out of the loop (literally!) once you’ve rendered your
fonts. So, if it’s still slow, it should have nothing to do with TTF
rendering, unless you put some code in the wrong place…

Hmm… I just call the OpenGL display lists (my code is similar to code I
found in the NeHe tutorials).

Well, it is probably about texture filtering. AFAIR I used linear filtering.

Regards

I just finished writing the new TFont and TGlyph classes. You can find them
in the files I included.

For now I can test it only on the Sempron 2500+, so I don’t know whether I
solved the problem or not…

There is one strange thing. When I initialize video like this:

screen = SDL_SetVideoMode (800, 600, 16, SDL_SWSURFACE | SDL_FULLSCREEN);

and use

SDL_UpdateRect (screen, 0, 0, 0, 0);

I get about 80 fps. But when I initialize video like this:

screen = SDL_SetVideoMode (800, 600, 16, SDL_HWSURFACE | SDL_DOUBLEBUF |
SDL_FULLSCREEN);

and use

SDL_Flip (screen);

I get only 10 fps. Why? Am I doing something wrong?

Regards
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed…
Name: fontman.h
URL: http://lists.libsdl.org/pipermail/sdl-libsdl.org/attachments/20050913/06a7beb9/attachment.txt
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed…
Name: fontman.cpp
URL: http://lists.libsdl.org/pipermail/sdl-libsdl.org/attachments/20050913/06a7beb9/attachment.asc

In my opinion, instead of converting TTF fonts to bitmaps at run-time, you
should do so beforehand, to avoid an extra dependency, make the engine
simpler, and use less CPU and memory.
Writing a program to convert a TTF to a bitmap is very simple, as is making
it accept arguments and add graphical effects.

Cheers,
Ricardo

On Monday, 12 September 2005 18:32, Bob Pendleton wrote:

[…]


We have more to fear from the bungling of the incompetent than from the
machinations of the wicked.

Ricardo Cruz wrote:

In my opinion, instead of converting TTF fonts to bitmaps at run-time, you
should do so beforehand, to avoid an extra dependency, make the engine
simpler, and use less CPU and memory. Writing a program to convert a TTF to
a bitmap is very simple, as is making it accept arguments and add graphical
effects.

Well, I don’t see the difference between your solution and my code. I actually convert the TTF to bitmaps during application initialization - the TFont class loads the TTF in its constructor, creates the bitmaps and closes the TTF. I included this code in my previous e-mail (fontman.cpp and fontman.h) - take a look ;)

Regards

If you are getting low fps and you aren’t rendering the TTFs on the fly, it
sounds like you need to make sure all your surfaces are in the same format
as the screen, so SDL doesn’t have to do on-the-fly conversion every frame
(slow!).

Check out SDL_DisplayFormat and SDL_DisplayFormatAlpha (if you are using
alpha).

----- Original Message -----

From: kds71@interia.pl (kds71)
To: "A list for developers using the SDL library. (includes SDL-announce)"

Sent: Tuesday, September 13, 2005 5:57 AM
Subject: Re: [SDL] What is wrong with SDL_TTF?

[…]



[…]

There is one strange thing. When I initialize video like this:

screen = SDL_SetVideoMode (800, 600, 16, SDL_SWSURFACE | SDL_FULLSCREEN);

and use

SDL_UpdateRect (screen, 0, 0, 0, 0);

I get about 80 fps. But when I initialize video like this:

screen = SDL_SetVideoMode (800, 600, 16, SDL_HWSURFACE | SDL_DOUBLEBUF |
SDL_FULLSCREEN);

and use

SDL_Flip (screen);

I get only 10 fps. Why? Am I doing something wrong?

Yeah, you are using a hardware buffer. Writing to a hardware buffer in
software is very slow. This is especially true when you are doing alpha
blending or any other operation that requires you to read from the
hardware buffer.

Also, does your hardware actually support a 16-bit video format? Did you
check that you are really getting a 16-bit format? And are your images in
the same 16-bit format as your screen? If not, then SDL may be
converting your images on the fly, and that will slow you way down.
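To see why a format mismatch hurts: converting, say, 8-bit-per-channel RGB pixels to a 16-bit RGB565 screen means per-pixel bit twiddling like the sketch below on every blit, unless the surface is converted once up front with SDL_DisplayFormat. The helper name here is illustrative:

```cpp
#include <cstdint>

// Pack an 8-bit-per-channel RGB color into the common 16-bit RGB565
// layout: 5 bits red, 6 bits green, 5 bits blue. Work of this kind has
// to happen for every pixel, on every blit, when a surface's format
// does not match the screen's.
uint16_t toRGB565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

Pre-converting every surface once at load time pays this cost a single time instead of once per pixel per frame.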

	Bob Pendleton

On Tue, 2005-09-13 at 01:20 +0200, kds71 wrote:

Regards


plain text document attachment (fontman.h)
#ifndef FONTMAN
#define FONTMAN

#include <stdio.h>
#include "SDL/SDL.h"
#include "SDL/SDL_ttf.h"
#include "tstring.h"

class TGlyph
{
private:
    SDL_Surface *glyph_;
    int width_;
    int height_;
public:
    TGlyph () : glyph_ (0), width_ (0), height_ (0) {}  // keep glyph_ null until assign()
    ~TGlyph ();
    void assign (Uint16 c, TTF_Font *font, const SDL_Color &color);
    void render (SDL_Surface *dest, SDL_Rect &rect) const;
    int width () const;
    int height () const;
};

class TFont
{
private:
    TGlyph glyphs_[112];
    TString *chars_;
    int height_;
    Uint16 unicode_ (char) const;
public:
    TFont (const char *path, int size, const SDL_Color &color);
    ~TFont ();
    void render (SDL_Surface *dest, int x, int y, const TString &text) const;
    int width (const TString &text) const;
    int height () const;
};

#endif
plain text document attachment (fontman.cpp)
#include "fontman.h"

// TGLYPH

void TGlyph::assign (Uint16 c, TTF_Font *font, const SDL_Color &color)
{
    Uint16 text[2];
    text[0] = c;
    text[1] = 0;

    glyph_ = TTF_RenderUNICODE_Blended (font, text, color);
    if (!glyph_)   // rendering can fail; don't dereference a null surface
    {
        fprintf (stderr, "Failed TTF_RenderUNICODE_Blended: %s\n", TTF_GetError ());
        exit (1);
    }
    width_ = glyph_->w;
    height_ = glyph_->h;
}

TGlyph::~TGlyph ()
{
    if (glyph_)
        SDL_FreeSurface (glyph_);
}

void TGlyph::render (SDL_Surface *dest, SDL_Rect &rect) const
{
    SDL_BlitSurface (glyph_, 0, dest, &rect);
    rect.x += width_;   // advance the pen position for the next glyph
}

int TGlyph::width () const
{
    return width_;
}

int TGlyph::height () const
{
    return height_;
}

// TFONT

// Map the Polish characters of the charset (single bytes in the author's
// 8-bit source encoding) to their Unicode code points.
Uint16 TFont::unicode_ (char c) const
{
    Uint16 x = static_cast<Uint16> (c);

    if (c == 'ą') x = 0x0105;
    if (c == 'Ą') x = 0x0104;
    if (c == 'ć') x = 0x0107;
    if (c == 'Ć') x = 0x0106;
    if (c == 'ę') x = 0x0119;
    if (c == 'Ę') x = 0x0118;
    if (c == 'ł') x = 0x0142;
    if (c == 'Ł') x = 0x0141;
    if (c == 'ń') x = 0x0144;
    if (c == 'Ń') x = 0x0143;
    if (c == 'ó') x = 0x00f3;
    if (c == 'Ó') x = 0x00d3;
    if (c == 'ś') x = 0x015b;
    if (c == 'Ś') x = 0x015a;
    if (c == 'ź') x = 0x017a;
    if (c == 'Ź') x = 0x0179;
    if (c == 'ż') x = 0x017c;
    if (c == 'Ż') x = 0x017b;

    return x;
}

TFont::TFont (const char *path, int size, const SDL_Color &color)
{
    TTF_Font *font;
    font = TTF_OpenFont (path, size);
    if (!font)
    {
        fprintf (stderr, "Failed TTF_OpenFont: %s\n", TTF_GetError ());
        exit (1);
    }

    // The character set: ASCII plus the Polish letters handled by
    // unicode_(). Its length must match the size of the glyphs_ array.
    chars_ = new TString (" abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789`!@#$%^&*()-_=+\\|[]{};:'\",<.>/ąĄćĆęĘłŁńŃóÓśŚźŹżŻ");

    for (int i = 0; i < 112; i++)
    {
        Uint16 c = unicode_ ((*chars_) [i]);
        glyphs_[i].assign (c, font, color);
    }

    height_ = TTF_FontHeight (font);
    TTF_CloseFont (font);
}

TFont::~TFont ()
{
    delete chars_;
}

void TFont::render (SDL_Surface *dest, int x, int y, const TString &text) const
{
    SDL_Rect tmp_rect = {x, y, 0, 0};

    for (int i = 0; i < text.length (); i++)
    {
        int glyphno = chars_->pos (text[i]);
        glyphs_[glyphno].render (dest, tmp_rect);
    }
}

int TFont::width (const TString &text) const
{
    int result = 0;
    for (int i = 0; i < text.length (); i++)
    {
        int glyphno = chars_->pos (text[i]);
        result += glyphs_[glyphno].width ();
    }

    return result;
}

int TFont::height () const
{
    return height_;
}




What I suggested was to convert the TTF files into bitmap-format files
beforehand and then use the latter in your game. As you do it now, you have
to convert the vector fonts into raster bitmaps at run-time (yes,
initialization happens at run-time).
Converting beforehand would avoid making your code more complex, the extra
dependency, the extra processor work (and yes, I understood it happens only
once per run), and possible differences between TTF renderers.

Whether it’s worth it or not is your call. To me, it doesn’t make much sense
to convert a TTF into a bitmap on the fly.

Cheers,
Ricardo

On Tuesday, 13 September 2005 13:57, kds wrote:

[…]


QOTD:
“It was so cold last winter that I saw a lawyer with his
hands in his own pockets.”

In my opinion, instead of converting TTF fonts to bitmaps at run-time,
you should do so beforehand, to avoid an extra dependency, make the engine
simpler, and use less CPU and memory. Writing a program to convert a
TTF to a bitmap is very simple, as is making it accept arguments
and add graphical effects.

Cheers,
Ricardo

Perhaps it’s my laptop hardware (PII 300 MHz, no hardware acceleration
support), but I’ve found that blitting bitmap fonts every frame still gives
me a noticeable slowdown.

I ended up storing a copy of the rendered text in memory, and blitting to
the main display only when that part of the screen needs to be redrawn.
Much friendlier to the CPU, at the cost of a little memory.

YMMV,
Jesse

On Tue, 13 Sep 2005, Ricardo Cruz wrote:


Any sufficiently advanced incompetence is indistinguishable from malice.
- Vernon Schryver

Well, of course, you can never eliminate the cost of the blit to the
screen, unless you simply don’t do it! :)

However, if you have a lot of stuff moving around and/or a scrolling
background, there’s no way to avoid redrawing the text every frame -
and that’s when it matters how you do it.

(And the fastest way to do it is probably prerendering the whole
message into a surface, DisplayFormat() it, with RLE acceleration
where appropriate, and then just do a single blit per update. This
will not work if you need to generate new messages while the time
critical main loop is running, though. Well, you can do that too,
but that’s a bit more complicated…)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,… |
`-----------------------------------> http://audiality.org -’
http://olofson.net | http://www.reologica.se

On Tuesday, 13 September 2005 19:38, Jesse Meyer wrote:

[…]