SDL_LogMessage vs printf - float number difference

When I output a number from a floating point division to the console I get different results using SDL_LogMessage than when using printf. Anyone know why?

Dividing 880 by 256, printf displays 3.437500, but SDL_LogMessage says 3.437499

#include <stdio.h>
#include "SDL2/SDL.h"

int main(int argc, char *argv[])
{
	float a = 880;
	float b = 256;
	float c = a / b;

	SDL_LogMessage(SDL_LOG_CATEGORY_APPLICATION, SDL_LOG_PRIORITY_INFO, "\n\n   SDL_LogMessage says:      %f\n", c);
	printf("   printf says:              %f\n", c);

	return 0;
}

My guess is that one is using a 32-bit floating point number and the other a 64-bit one? What does printf-ing sizeof(c) give you?

I think the reason is that libc's (the CRT's) printf() takes care to round correctly, while SDL's float formatting (used in its own printf-like functions) is more primitive.

I initially guessed that too, but the OP's quotient (880 / 256) has a precise, short value in both binary and decimal, so no rounding should be needed.

Look at the code: SDL/SDL_string.c at main · libsdl-org/SDL · GitHub

A simplified version of that, which still does the same calculation for the decimals, looks like this:

void printFloatCheap(double arg, int precision)
{
	unsigned long value = (unsigned long)arg;
	printf("%lu.", value); // print integer part
	arg -= value; // only decimals left
	int mult = 10;
	while (precision-- > 0) {
		value = (unsigned long)(arg * mult);
		// add underscore to make more visible how many decimals
		// were calculated in each step
		printf("_%lu", value);

/* // SDL_PrintFloat() does this as it outputs into char* text, but here
   // we just printf() things directly, so commented out
		if (len >= left) {
			text += (left > 1) ? left - 1 : 0;
			left = SDL_min(left, 1);
		} else {
			text += len;
			left -= len;
		}
*/
		arg -= (double)value / mult;
		mult *= 10;
	}
}

and indeed (with precision = 6, which is the default) it prints 3._4_3_7_4_9_9

The imprecision probably comes from arg -= (double)value / mult;

FWIW, the following code works better for this number:

void printFloatCheap2(double arg, int precision)
{
	unsigned long value = (unsigned long)arg;
	printf("%lu.", value); // print integer part
	arg -= value; // only decimals left
	// mult = 10 ^ precision
	int mult = 1;
	for (int i = 0; i < precision; ++i) {
		mult *= 10;
	}
	arg *= mult;
	printf("%lu", (unsigned long)arg);
}

but it might not work as well in other cases, and I can imagine the original code is the way it is because it makes sure, at every step, not to overflow the output buffer.

sizeof(c) is 4 bytes.

I think I should stick with printf when I need accurate results on the fly.

Fair enough. TBH I wouldn’t have expected a logging function to be able to print anything except integers anyway, so even inaccurate floats are a bonus!

This seems a more straightforward way to do it. It gives the correct answer for (but isn't specific to) the OP's value, and it works with negative numbers, which the original doesn't:

void printFloatCheap(double arg, int precision)
{
	long value = (long)arg;
	printf("%ld.", value); // print integer part
	arg -= value; // only decimals left
	if (arg < 0) arg = -arg; // abs(arg)
	while (precision-- > 0) {
		arg *= 10.0;
		value = (long)arg;
		printf("%ld", value);
		arg -= value;
	}
}
Incidentally, I'm not enthusiastic about using 'long' as a type, because it's 32 bits on some systems and 64 bits on others.