With SDL 1.2, we did the normal(?) thing: call SDL_EnableUNICODE(1), and then for each character we received via keyEvent->keysym.unicode, we would append it to our input buffer to build the Unicode string.
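For reference, our 1.2-era handling was roughly the following (a simplified sketch; `append_ucs2` is a made-up helper showing how we stored each 16-bit keysym.unicode value into the buffer as UTF-8):

```c
#include <stddef.h>
#include <string.h>

/* Sketch of the SDL 1.2-era pattern: after SDL_EnableUNICODE(1), each
 * keydown gave us a Uint16 in keyEvent->keysym.unicode, which we
 * appended to the input buffer.  Here the 16-bit value is encoded as
 * UTF-8 so the buffer holds a plain UTF-8 string.  Returns 0 if the
 * buffer is full, 1 otherwise. */
static int append_ucs2(char *buf, size_t cap, unsigned ch)
{
    size_t len = strlen(buf);
    if (ch == 0)
        return 1;                          /* no printable character */
    if (ch < 0x80) {                       /* plain ASCII: 1 byte */
        if (len + 2 > cap) return 0;
        buf[len] = (char)ch;
        buf[len + 1] = '\0';
    } else if (ch < 0x800) {               /* 2-byte UTF-8 sequence */
        if (len + 3 > cap) return 0;
        buf[len]     = (char)(0xC0 | (ch >> 6));
        buf[len + 1] = (char)(0x80 | (ch & 0x3F));
        buf[len + 2] = '\0';
    } else {                               /* 3-byte UTF-8 sequence */
        if (len + 4 > cap) return 0;
        buf[len]     = (char)(0xE0 | (ch >> 12));
        buf[len + 1] = (char)(0x80 | ((ch >> 6) & 0x3F));
        buf[len + 2] = (char)(0x80 | (ch & 0x3F));
        buf[len + 3] = '\0';
    }
    return 1;
}
```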
With SDL 2.0, both of those have been removed, and the migration notes say to use the new text input API.
However, I am not clear on the exact reason to use the new text input API instead of the old method (which would now require a workaround, since those calls are gone).
Let’s say the t key is used for communications.
When we detect that the t key is pressed, we would call SDL_StartTextInput(), and then in the event loop we would keep getting SDL_TEXTINPUT events as the user types. But since they could also be hitting other hotkeys (function keys, Tab, etc.), wouldn't all of that end up in the Unicode string as well, so we would need a way to filter the non-ASCII (or non-printable) keys out of it, right?
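To show what I mean by filtering: something like the standalone helper below, which appends the UTF-8 text that arrives in an SDL_TEXTINPUT event (event.text.text) to the chat buffer while dropping ASCII control bytes, just in case. The names `chat_buf` and the helper itself are made up for illustration:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: append UTF-8 text (as delivered in an
 * SDL_TEXTINPUT event's event.text.text field) to a fixed-size chat
 * buffer, skipping ASCII control bytes and respecting the buffer
 * capacity. */
static void append_text_input(char *chat_buf, size_t cap, const char *utf8)
{
    size_t len = strlen(chat_buf);
    for (const char *p = utf8; *p; ++p) {
        unsigned char c = (unsigned char)*p;
        if (c < 0x20 || c == 0x7F)
            continue;                      /* drop control bytes */
        if (len + 2 > cap)
            break;                         /* leave room for the NUL */
        chat_buf[len++] = (char)c;
    }
    chat_buf[len] = '\0';
}
```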
Then if they want to edit said chat string (hit Backspace or use the arrow keys), do we get SDL_TextEditingEvent events for those, and is any of that filtered out as well?
Then finally, while in SDL_StartTextInput() mode, we wait for them to hit the Enter key and then call SDL_StopTextInput(), which would leave us with the final Unicode string, correct?
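One wrinkle with handling Backspace ourselves: since the buffer now holds UTF-8, deleting "one character" means trimming a whole code point, not one byte. A sketch of what I assume we would have to do (continuation bytes in UTF-8 have the form 10xxxxxx):

```c
#include <stddef.h>
#include <string.h>

/* Sketch: on a Backspace keydown, remove the last whole UTF-8 code
 * point from the buffer by stripping trailing continuation bytes
 * (those matching 10xxxxxx) plus the lead byte. */
static void backspace_utf8(char *buf)
{
    size_t len = strlen(buf);
    if (len == 0)
        return;
    do {
        --len;
    } while (len > 0 && ((unsigned char)buf[len] & 0xC0) == 0x80);
    buf[len] = '\0';
}
```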
This brings up SDL_SetTextInputRect(); just how does it play into things? The docs say: "Use this function to set the rectangle used to type Unicode text inputs." Is this supposed to be the buffer limit of the text, so that if we wanted 80 characters max we would set this to 80? Or is it for something else?
I just don't see the need for this in games. The old way handled Unicode input just fine, and we never had any bug reports, so why should we use the new way, with all the extra processing required for the new logic and event loops? We still need to advance the cursor in the game for every character they type, so I guess I am just missing something about the new text API.
Maybe someone can help clear the fog about this?