SDL_ttf 2.0.18 Surface to OpenGL texture not consistent with ttf 2.0.15

Hello all!

I recently updated my SDL versions from SDL 2.0.18 + SDL_ttf 2.0.15 to 2.0.20 and 2.0.18 respectively. I have code that builds a glyph map using TTF_RenderText_Blended for a monospaced font, then converts the surface to an OpenGL texture.

Glyph texture building code:

    const std::string characters = " ! #$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ    _ abcdefghijklmnopqrstuvwxyz | ";

    inline void initialize_fixed_function_text() {
        // Build Glyph texture
        SDL_Surface * LUTSurface = TTF_RenderText_Blended(Fonts::Get().GetFont(),
                                                          characters.c_str(),
                                                          {255, 255, 255, 255});
        surface_width = LUTSurface->w;
        surface_height = LUTSurface->h;
        make_texture(fast_text_texture, LUTSurface->pixels, LUTSurface->w, LUTSurface->h);
        TTF_SizeText(Fonts::Get().GetFont(), "A", &character_width, &character_height);
        character_width_percent = float(character_width) / float(LUTSurface->w);
    }

make_texture():

    void make_texture(Texture &texture, void *data, unsigned long width, unsigned long height) override {
        glGenTextures(1, &texture.gl_texture);
        glBindTexture(GL_TEXTURE_2D, texture.gl_texture);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
        glGenerateMipmap(GL_TEXTURE_2D);
    }

I then build rects for each character dynamically by setting the vertex texture coordinate to a point in the glyph map.
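
The lookup is roughly this (a sketch, not my exact code; glyph_u_range is a made-up name, but characters and character_width_percent are the variables from above):

    // Horizontal texture-coordinate range of one glyph in the strip.
    // Works because the font is monospaced, so every glyph has the same width.
    std::pair<float, float> glyph_u_range(char c) {
        size_t index = characters.find(c);
        if (index == std::string::npos)
            index = 0; // unknown characters fall back to the leading space
        float u0 = float(index) * character_width_percent;
        return {u0, u0 + character_width_percent};
    }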

Here is the difference between the two versions.

[screenshot: rendered glyph map output, SDL_ttf 2.0.15 vs 2.0.18]

However, if I save the SDL_Surface to disk and reload it, it works correctly again:

        SDL_SaveBMP(LUTSurface, "/Users/joe/Library/Preferences/MiniMeters/glyphs.bmp");
        LUTSurface = SDL_LoadBMP("/Users/joe/Library/Preferences/MiniMeters/glyphs.bmp");

My assumption is that the Surface format changed, but I could not find information on this in the changelog or commits.

Any help would be appreciated! Thank you!

The format hasn’t changed, but the surface created by SDL_ttf has a pitch greater than its row width:
(4 * surface->w < surface->pitch)
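
You can verify it with a quick log (a sketch using the LUTSurface from your code; SDL_Log and SDL_GetPixelFormatName are standard SDL2 calls):

    // Compare the actual pitch with the tightly packed row size.
    SDL_Log("format=%s w=%d pitch=%d tight=%d",
            SDL_GetPixelFormatName(LUTSurface->format->format),
            LUTSurface->w, LUTSurface->pitch,
            LUTSurface->w * LUTSurface->format->BytesPerPixel);
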
You should either tightly repack the memory at surface->pixels in place, or pass OpenGL a parameter that tells it about the pitch.

Sweet! Thank you for your help. I had no clue what pitch was before this, and after some googling I now understand.

For OpenGL I am doing:

    glPixelStorei(GL_UNPACK_ROW_LENGTH, pitch / bytes_per_pixel);
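
In context that goes right before the glTexImage2D call in make_texture(), something like this (a sketch; it assumes make_texture() is also given the surface pitch, and it resets the row length afterwards so later uploads are unaffected):

    // Row length is measured in pixels, so divide the byte pitch by bytes per pixel.
    glPixelStorei(GL_UNPACK_ROW_LENGTH, pitch / bytes_per_pixel);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
    // Back to the default of 0 (rows tightly packed).
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);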

and in my Metal implementation I am passing the pitch as the bytesPerRow argument, like so:

    mtl_texture->replaceRegion(region, 0, data, pitch);
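
Spelled out in metal-cpp it's roughly this (a sketch; the region is just the full surface size):

    // bytesPerRow is the surface pitch in bytes, not width * 4.
    MTL::Region region = MTL::Region::Make2D(0, 0, width, height);
    mtl_texture->replaceRegion(region, 0, data, pitch);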

Just a note for anyone else who finds this thread: OpenGL ES 2 doesn’t have any API to set pitch / row length, although OpenGL ES 3 and desktop OpenGL do.

And it’s possible to re-pack the surface like this:

    Sint32 i;
    Uint32 len = surface->w * surface->format->BytesPerPixel;
    Uint8 *src = surface->pixels;
    Uint8 *dst = surface->pixels;
    for (i = 0; i < surface->h; i++) {
        SDL_memmove(dst, src, len);
        dst += len;
        src += surface->pitch;
    }
    surface->pitch = len;
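
To use this with the code from the first post, you could wrap the loop in a small helper (repack_surface is a made-up name) and call it right before make_texture():

    static void repack_surface(SDL_Surface *surface) {
        int len = surface->w * surface->format->BytesPerPixel;
        Uint8 *dst = (Uint8 *)surface->pixels;
        Uint8 *src = (Uint8 *)surface->pixels;
        for (int i = 0; i < surface->h; i++) {
            SDL_memmove(dst, src, len);
            dst += len;
            src += surface->pitch;
        }
        surface->pitch = len;
    }

    // ...
    repack_surface(LUTSurface);
    make_texture(fast_text_texture, LUTSurface->pixels, LUTSurface->w, LUTSurface->h);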

I can't compile that: I get a lot of warnings, and then a segmentation fault.

    Uint8 *src = surface->pixels;

    warning: invalid conversion from ‘void*’ to ‘Uint8*’

In C you can convert void* to other pointer types implicitly.

In C++ you need to use a cast.

    Uint8* src = static_cast<Uint8*>(surface->pixels);

OK, thanks! Now it compiles, but it didn't fix my problem.

The glPixelStorei call mentioned above did the trick:

    glPixelStorei(GL_UNPACK_ROW_LENGTH, surf->pitch / surf->format->BytesPerPixel);

For context, this was for a game that I compile with "make SDL=2".