Sometimes I want to pass color components whose values are in [0.0, 1.0] to SDL_SetRenderDrawColor. I believe OpenGL/GLSL programmers would agree with me. Converting a real value in [0.0, 1.0] to [0, 256) is not only extra work, but also prone to errors that are hard for new SDL users (like me) to spot: 1.0 may get converted to 256, which then wraps to 0 because the parameter is defined as Uint8, turning full brightness into complete darkness. So, is there a version (e.g., an overloaded version) of SDL_SetRenderDrawColor in SDL that accepts components in [0.0, 1.0]? Thanks a lot.
Surely 1.0 should be converted to 255, not 256? I would expect the conversion (rounded) to be:
i = int(255 * n + 0.5);
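A minimal sketch of that rounded conversion in C, with clamping added for safety (the helper name `to_u8` is my own):

```c
#include <assert.h>

/* Convert a component in [0.0, 1.0] to [0, 255], rounding to nearest.
   Out-of-range inputs are clamped first, so 1.0 maps to 255, never 256. */
static unsigned char to_u8(double n)
{
    if (n < 0.0) n = 0.0;
    if (n > 1.0) n = 1.0;
    return (unsigned char)(255.0 * n + 0.5);
}
```

With this, `to_u8(1.0)` yields 255 rather than wrapping to 0, and `to_u8(0.5)` rounds to 128.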
@rtrussell: The conversion you cited was a deliberate example from my original post of the kind of error that can occur when converting from [0.0, 1.0] to an 8-bit integer, not a conversion I recommend.
Your conversion is linear. But what is the meaning of the 8-bit integer passed to SDL_SetRenderDrawColor? Is it exactly the value stored in the display buffer? Unfortunately the documentation does not say. I guess it is. But if so, these values are display-encoded RGB triplets (encoded so they display correctly). Because [0.0, 1.0] usually represents linear values, like those computed in ray tracing or physically based simulation, we have to apply gamma correction as part of the conversion (i.e., encoding). I know PCs use the sRGB standard, whose gamma is approximately 2.2. Is that also true for gaming machines like the Xbox or PS4? I don't know. The situation is further complicated by the 10-bit color supported by much of today's graphics hardware (I happen to have such a card in use). I think a good API should not leave all these details to users, so I was asking whether SDL contains a [0.0, 1.0] version of SDL_SetRenderDrawColor that does this additional conversion work (preferably with some robustness, such as clamping values greater than 1.0 to 1.0). That's the main idea of my question.
SDL does not contain a linear version of that function.
I’m pretty certain that the values passed in are sRGB with an assumed gamma of 2.2. Even if other platforms technically have a slightly different gamma, it would be so similar that you couldn’t notice the difference.
The API is supposed to make it easy for users to understand and pick colors: 0 is full black, 255 is fully bright, and 127 looks half as bright, as expected.
When it comes to converting a [0.0, 1.0] floating point range to a discrete integer range, anyone who works with audio processing will tell you there is no “neat” way to do it.
If you must choose your colors from a linear color space, I would suggest doing a 2.2 gamma adjustment and then multiplying and rounding in some way involving 255 (not 256).
Or use OpenGL/Vulkan/DirectX if you must work in linear space all the time. The SDL Render API is meant to be pragmatic: it abstracts away all the exciting low-level details you’re interested in, to make things easy for people who don’t need them for their simple blitting needs.