Hello,
thanks for all the replies.
I’ll take a look at the freetype-only example soon.
Sam Lantinga wrote:
The UTF8 functions are intended to support the full range of Unicode
characters.
but they don’t work currently because the UTF8* functions are mapped to
UNICODE* function calls
which only take 16-bit code points. Am I correct here?
As far as I can see from the current clone of the repository, all calls are
instead mapped to TTF_RenderUTF8*()/TTF_SizeUTF8(),
where the text first gets converted to UTF-8 and each code point is then cut down to 16 bits by:
SDL_ttf.c:1226 Uint16 c = UTF8_getch(&text, &textlen);
SDL_ttf.c:1407 Uint16 c = UTF8_getch(&text, &textlen);
SDL_ttf.c:1589 Uint16 c = UTF8_getch(&text, &textlen);
SDL_ttf.c:1759 Uint16 c = UTF8_getch(&text, &textlen);
SDL_ttf.c:2027 Uint16 c = UTF8_getch(&text, &textlen);
while in fact UTF8_getch() returns Uint32.
The question remains whether UTF8_getch()'s decoding is correct for code points
above 0xffff.
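Just to illustrate what a correct decode of a 4-byte sequence looks like, here is a minimal standalone sketch. This is not SDL_ttf's UTF8_getch(); error handling is reduced to returning U+FFFD for anything unexpected, and there are no overlong/surrogate checks:
Code:
#include <stdio.h>
#include <stdint.h>

/* Minimal UTF-8 decoder sketch (not SDL_ttf's UTF8_getch()): a 4-byte
 * sequence carries up to 21 bits of code point, so the result has to be
 * at least 32 bits wide. */
static uint32_t utf8_decode(const unsigned char **p)
{
    const unsigned char *s = *p;
    uint32_t cp;
    int extra, i;

    if (s[0] < 0x80)                { cp = s[0];        extra = 0; }
    else if ((s[0] & 0xE0) == 0xC0) { cp = s[0] & 0x1F; extra = 1; }
    else if ((s[0] & 0xF0) == 0xE0) { cp = s[0] & 0x0F; extra = 2; }
    else if ((s[0] & 0xF8) == 0xF0) { cp = s[0] & 0x07; extra = 3; }
    else { *p = s + 1; return 0xFFFD; }

    for (i = 1; i <= extra; i++) {
        if ((s[i] & 0xC0) != 0x80) { *p = s + i; return 0xFFFD; }
        cp = (cp << 6) | (s[i] & 0x3F);
    }
    *p = s + extra + 1;
    return cp;
}

int main(void)
{
    /* U+12000 (CUNEIFORM SIGN A) is F0 92 80 80 in UTF-8. */
    const unsigned char text[] = { 0xF0, 0x92, 0x80, 0x80, 0x00 };
    const unsigned char *p = text;
    printf("decoded: U+%04X\n", (unsigned)utf8_decode(&p)); /* prints U+12000 */
    return 0;
}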
Since it’s a draw between changing SDL_ttf’s internal representation from
UTF-8 to UCS-4 and keeping UTF-8 on the grounds that the “common case” is
usually ASCII, just changing those Uint16-s to Uint32-s (there, and wherever
this ‘c’ gets passed on to) should more or less fix the rendering of individual glyphs.
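To make the effect of the narrowing concrete, a tiny standalone demo (plain stdint types instead of SDL's Uint16/Uint32; the value 0x12000 assumes UTF8_getch() decodes code points above 0xffff correctly):
Code:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t decoded  = 0x12000;            /* what UTF8_getch() would return */
    uint16_t narrowed = (uint16_t)decoded;  /* what a Uint16 'c' keeps        */
    uint32_t widened  = decoded;            /* what a Uint32 'c' would keep   */

    printf("decoded:  U+%X\n", (unsigned)decoded);  /* U+12000                */
    printf("narrowed: U+%X\n", (unsigned)narrowed); /* U+2000, wrong glyph    */
    printf("widened:  U+%X\n", (unsigned)widened);  /* U+12000, correct       */
    return 0;
}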
The restriction to 16 bits happens in:
Code:
static Uint16 *UTF8_to_UNICODE(Uint16 *unicode, const char *utf8, int len)
What UTF8_to_UNICODE() does is throw away the upper bits (code point = code
point & 0xffff).
This looks wrong: it maps, for example, code point 0x12000 to code point 0x2000.
Mapping such code points to 0xfffd (the Unicode replacement character) seems more appropriate.
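Something along these lines, as a standalone sketch (to_ucs2_or_replacement() is a hypothetical helper for illustration, not the real UTF8_to_UNICODE()):
Code:
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper, not the real UTF8_to_UNICODE(): anything outside the
 * 16-bit range becomes U+FFFD (REPLACEMENT CHARACTER) instead of being
 * masked with 0xffff, so U+12000 no longer silently turns into U+2000. */
static uint16_t to_ucs2_or_replacement(uint32_t cp)
{
    return (cp > 0xFFFF) ? 0xFFFD : (uint16_t)cp;
}

int main(void)
{
    printf("U+0041  -> U+%04X\n", (unsigned)to_ucs2_or_replacement(0x0041));  /* U+0041 */
    printf("U+12000 -> U+%04X\n", (unsigned)to_ucs2_or_replacement(0x12000)); /* U+FFFD */
    return 0;
}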
Can’t find any mention of that in the latest source code. That would be:
http://hg.libsdl.org/SDL_ttf/
changeset: 248:95fabf442c03
tag: tip
user: Sam Lantinga
date: Fri Jul 05 21:30:01 2013 -0700
summary: Fixed bug: SDL_ttf incorrectly identifies some utf8 characters as “overlong”.

On Wed, Jul 10, 2013 at 12:26 PM, mkiever wrote:
–
./lxnt