High output (and/or input) delay of SDL2 compared to non-SDL: reaction time about 75% longer (~4 frames) in CS:GO and a test project. A bug?

Thesis in short: if you build an app that measures your response time to a visual or auditory stimulus, the measured reaction time is considerably longer when the app is built with SDL than with a non-SDL implementation (independent of frames per second).

Evidence:
1.) The AAA game Counter-Strike: Global Offensive (shooter): comparing the Linux build (which uses SDL) with the Windows build (non-SDL, afaik).
2.) My own test project

1.) CS:GO: Linux SDL vs Windows
I recently discovered an aim training map which measures your reaction time
(Yprac Aim Arena).
A red screen turns green and you have to press the left mouse button as fast as possible.
On Windows my mean reaction time is about 200 ms. On Linux the game uses an SDL implementation, and there my reaction time is about 260 ms. (a video about this map)
60 ms doesn't sound like much, but it has a huge impact on gameplay. On my 60 Hz display that is about 4 frames of delay compared to the non-SDL Windows build.
A pro gamer (in the video above) gets 169 ms @ 60 Hz. According to a post I read somewhere on the internet, the fastest human response time to a visual stimulus is about 120 ms (a physical limitation).
If you subtract that lower bound to get the controllable part of the response time, it is 80 ms on Windows and 140 ms on Linux with SDL, i.e. 75% slower with SDL. For the pro gamer it would be 49 ms vs 109 ms, or 122% slower. That means if he played on Linux and I on Windows, I would win (if only reaction time mattered; just to illustrate the impact of those 60 ms).
(Tested over multiple runs with the same settings and hardware, ~200+ fps.)
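
For clarity, here is the arithmetic behind the 75% and 122% figures (using that 120 ms floor):

$$\frac{260 - 120}{200 - 120} = \frac{140}{80} = 1.75 \;\Rightarrow\; 75\%\ \text{slower}, \qquad \frac{(169 + 60) - 120}{169 - 120} = \frac{109}{49} \approx 2.22 \;\Rightarrow\; 122\%\ \text{slower}$$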

2.) Linux SDL vs Windows SDL
For further testing I wrote a small program that uses SDL to test the same thing (it shows a red/green image and measures the reaction time). On Linux I got roughly the same 260 ms again, on Windows 238 ms, so about 40 ms slower than in-game. I added an auditory stimulus with SDL_mixer:
Mix_PlayChannel(-1, sound, 0);
Nothing changed; still around 240 ms reaction time on Windows. For testing I swapped it for the Windows-only function (windows.h):
PlaySound(TEXT("sound.wav"), NULL, SND_FILENAME | SND_ASYNC);
With this I got a mean reaction time of 200 ms again, the same as in the in-game test from 1.).

Conclusion: the SDL output (picture and sound) is delayed by 40-60 ms, i.e. 2-4 frames, or about 75% slower.

(Tested on Linux Mint 18.3 and Windows 7, 60 Hz screen, Nvidia GPU, Intel CPU.)


Test code
Here is my crappy copy&paste code (sorry, I last used SDL1 a long time ago).
Maybe I made some crucial coding mistake that causes this delay rather than SDL itself, but that still wouldn't explain the big difference in CS:GO.

  //clang -std=c++14 ./react.cpp  -lSDL2 -lstdc++ 

//only for windows (for playSound)
#ifdef _WIN32 
    #include "stdafx.h"
    #include <windows.h>
#endif

#include <SDL2/SDL.h>
#include <stdio.h>
#include <ctime>
#include <iostream>
#include <chrono>


void runGame(SDL_Window* window) {
		

    SDL_AudioSpec wav_spec;
    Uint32 wav_length;
    Uint8 *wav_buffer;

    SDL_AudioSpec desired;
    SDL_AudioSpec obtained;

    SDL_zero(desired);
    desired.freq = 44100;
    desired.format = AUDIO_S16;
    desired.channels = 2;
    desired.samples = 4096;
    desired.callback = NULL;

    SDL_AudioDeviceID deviceId = 0;
    if (SDL_LoadWAV("sound.wav", &wav_spec, &wav_buffer, &wav_length))
    {
        deviceId = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained, 0);
        if (deviceId)
        {
            SDL_PauseAudioDevice(deviceId, 0);
            int success = SDL_QueueAudio(deviceId, wav_buffer, wav_length);
            if (success < 0)
                SDL_ShowSimpleMessageBox(0, "Error", "Failed to queue audio", NULL);
        }
        else
            SDL_ShowSimpleMessageBox(0, "Error", "Audio driver failed to initialize", NULL);
    }
    else
        SDL_ShowSimpleMessageBox(0, "Error", "wav failed to load", NULL);
 

	std::srand(std::time(nullptr));
	unsigned long long int delay, delaySum;	
	std::chrono::steady_clock::time_point start;
	std::chrono::steady_clock::time_point end;
	SDL_Surface *screenSurface = SDL_GetWindowSurface(window);

	SDL_Event event;	
	SDL_DisplayMode DM;
	SDL_GetCurrentDisplayMode(0, &DM);
	auto Width = DM.w;
	auto Height = DM.h;
	SDL_Rect rect;
	rect.x = (int)(Width / 2 - Width * 0.05);
	rect.y = (int)(Height / 2 - Height * 0.05);
	rect.w = (int)(Width * 0.1);
	rect.h = (int)(Height * 0.1);

	int counter = 0;
	delaySum = 0;
	int soundMode = 0;

	while (1) {
		start = std::chrono::steady_clock::now();
		end = std::chrono::steady_clock::now();

		// RAND_MAX is only 32767 on Windows, so combine several rand() calls as
		// unsigned long long (otherwise overflow/negative results were possible)
		// -> random wait of 2-6 seconds, in microseconds
		delay = ((((unsigned long long int)std::rand()) * 641234 + ((unsigned long long int)std::rand()) * 32000 + std::rand()) % 4000000) + 2000000;
		
		
		SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0xAA, 0x11, 0x11));
		SDL_FillRect(screenSurface, &rect, SDL_MapRGB(screenSurface->format, 0, 0, 0));
		
		SDL_UpdateWindowSurface(window);
		int soundModeBF = soundMode;
		while ((unsigned long long int)std::chrono::duration_cast<std::chrono::microseconds>(end - start).count()<delay) {
			SDL_Delay(1);
			end = std::chrono::steady_clock::now();

			SDL_PollEvent(&event);

			if (event.type == SDL_MOUSEBUTTONDOWN && event.button.button == SDL_BUTTON_RIGHT) { //right mouse button to reset counter
				counter = 0;
				delaySum = 0;
			}

			if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE) //exit with Esc
				return;

			if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_s && soundModeBF== soundMode) { //s to toggle sound on/off
				soundMode = (soundMode+1) % 2;
				counter = 0;
				delaySum = 0;				
			}
		}

		SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0x11, 0xAA, 0x11));			
		SDL_FillRect(screenSurface, &rect, SDL_MapRGB(screenSurface->format, 0, 0, 0));		
		SDL_UpdateWindowSurface(window);
		
		if(soundMode == 1){            
			SDL_QueueAudio(deviceId, wav_buffer, wav_length); // play via the SDL audio queue
			//PlaySound((LPCTSTR)SND_ALIAS_SYSTEMQUESTION, NULL, SND_ALIAS_ID | SND_ASYNC);		//only for windows	
			//PlaySound(TEXT("sound.wav"), NULL, SND_FILENAME | SND_ASYNC);
		}		
		//else if (soundMode==2) only one of those above work at same time, fix?
			
		
		start = std::chrono::steady_clock::now();
		bool pressed = false;

		bool exit = false;
		soundModeBF = soundMode;
		while (!pressed && !exit) {
			
			SDL_Delay(1);
			while (SDL_PollEvent(&event)) {
				if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_s && soundModeBF == soundMode) {
					soundMode = (soundMode + 1) % 2;
					counter = 0;
					delaySum = 0;
				}
				if (event.type == SDL_MOUSEBUTTONDOWN && event.button.button == SDL_BUTTON_LEFT) {
					pressed = true;		
					break;
				}
				if (event.type == SDL_MOUSEBUTTONDOWN && event.button.button == SDL_BUTTON_RIGHT) {
					counter = 0;
					delaySum = 0;
					break;
				}
				if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE) {
					exit = true;
					break;
				}
			}		
		}

		if (exit)
			break;

		end = std::chrono::steady_clock::now();

		delay = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
		counter++;
		delaySum += delay;
		std::cout << "delay "
			<< (delay / 1000)
			<< "ms. mean "<<(delaySum/(1000*counter)) << " sound "<< soundMode << " trials " << counter << " \n";

	}


    SDL_CloseAudioDevice(deviceId);
}

int main(int argc, char* args[])
{
	
	SDL_Window* window = NULL;
	SDL_Surface* screenSurface = NULL;

	if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) < 0) {
		fprintf(stderr, "could not initialize sdl2: %s\n", SDL_GetError());
		return 1;
	}
	
	SDL_DisplayMode DM;
	SDL_GetCurrentDisplayMode(0, &DM);
	auto Width = DM.w;
	auto Height = DM.h;	

	window = SDL_CreateWindow(
		"reaction test",
		SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
		Width, Height,
		SDL_WINDOW_SHOWN
	);

	if (window == NULL) {
		fprintf(stderr, "could not create window: %s\n", SDL_GetError());
		return 1;
	}	

	SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN);
	screenSurface = SDL_GetWindowSurface(window);

	runGame(window);

	SDL_DestroyWindow(window);
	SDL_Quit();
	return 0;
}

Is this a known issue?
Any way to fix it?
What are your thoughts about this topic?
What are your results in these tests? Is it only me who gets these delays?

You refer throughout to ‘SDL vs non-SDL’ but can you be sure SDL is the issue here? For example it’s possible that the Windows program you are comparing with doesn’t use a ‘3D’ backend at all (Direct3D or OpenGL) but instead old-fashioned Windows GDI. If that’s the case SDL can’t necessarily be blamed, but rather it may be an inherent delay in the 3D renderer it uses.

As far as I know it runs with DirectX/Direct3D on Windows and OpenGL+SDL2 on Linux.

And how would you explain my example program comparing SDL_mixer sound with PlaySound? If not the image, then at least the sound is delayed. But the SDL_mixer sound leads to the same response time as no sound at all, which means the visual output is delayed as well (or the human auditory response time is much faster than the visual one).


Out of curiosity I ran some tests with my sample program using only sound, on Windows and Linux (eyes closed). On Windows I again get a response time of 200 ms when I use the PlaySound function. With the SDL_mixer Mix_PlayChannel function I get about 290 ms, so it's even worse than the visual delay. For comparison I booted Linux again and got a stunning 550 ms response time to the auditory stimulus.

Update: with SDL_PauseAudioDevice instead of SDL_mixer it is ‘only’ 350 ms.

@react I think you need to narrow it down a bit more, because it’s not entirely clear what you are measuring, and there are too many variables.

If you think that from the window update call until the picture appears on the screen there is a ~30 ms delay, then try to create a minimal test case that measures exactly that, maybe by recording the screen with software or a camera. Same for the audio, except instead of SDL_mixer you should probably use the audio callback API to narrow it down further.

That would make the issue easy to reproduce and understand for people who might want to look into it.
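
For the audio part, a rough sketch of what I mean by the callback API (untested, just to show the idea; the point is that the timestamp is taken when SDL actually pulls data for the device, not when you queue it, and the buffer is kept small):

// Untested sketch: measure when SDL first pulls samples after the "stimulus" is armed.
#include <SDL2/SDL.h>
#include <atomic>
#include <chrono>
#include <cstdio>

static std::atomic<bool> armed{false};
static std::chrono::steady_clock::time_point stimulus_time;

static void fill_audio(void* /*userdata*/, Uint8* stream, int len) {
    SDL_memset(stream, 0, len);                       // silence by default
    if (armed.exchange(false)) {
        SDL_memset(stream, 0x40, len);                // constant offset -> audible click at onset
        auto dt = std::chrono::steady_clock::now() - stimulus_time;
        printf("callback reached %lld ms after arming\n",
               (long long)std::chrono::duration_cast<std::chrono::milliseconds>(dt).count());
    }
}

int main(int, char**) {
    if (SDL_Init(SDL_INIT_AUDIO) < 0) return 1;

    SDL_AudioSpec desired, obtained;
    SDL_zero(desired);
    desired.freq = 44100;
    desired.format = AUDIO_S16;
    desired.channels = 2;
    desired.samples = 512;            // small buffer to keep device-side buffering low
    desired.callback = fill_audio;

    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained, 0);
    if (!dev) return 1;
    SDL_PauseAudioDevice(dev, 0);     // start the callback

    stimulus_time = std::chrono::steady_clock::now();
    armed = true;                     // "play sound now"
    SDL_Delay(1000);                  // let the click play

    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}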

I don’t know why SDL would have higher latency on Windows than non-SDL games (unless maybe the non-SDL game disables the display compositor and the SDL game doesn’t?).
But for Linux vs Windows a compositor (as used by most of the desktop environments) could definitely cause some additional latency.
Not sure what desktop your distro uses by default and whether it allows disabling the compositor, but if it does, that's surely worth a try.
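
For reference, a minimal sketch (assuming SDL 2.0.8 or newer, where the SDL_HINT_VIDEO_X11_NET_WM_BYPASS_COMPOSITOR hint exists; there it already defaults to enabled, so setting it explicitly mainly rules out that something else changed it, and "0" lets you test the opposite):

// Untested sketch, assumes SDL >= 2.0.8.
#include <SDL2/SDL.h>

int main(int, char**) {
    // Ask the X11 compositor to bypass/unredirect this window (_NET_WM_BYPASS_COMPOSITOR),
    // which can reduce presentation latency on a composited desktop.
    // Must be set before the window is created.
    SDL_SetHint(SDL_HINT_VIDEO_X11_NET_WM_BYPASS_COMPOSITOR, "1");

    if (SDL_Init(SDL_INIT_VIDEO) < 0) return 1;
    SDL_Window* window = SDL_CreateWindow("compositor test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_SHOWN);
    SDL_Delay(3000);                  // keep the window up long enough to observe
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}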

I think it’s well established that if you want the lowest latency sound, it’s better not to use SDL_mixer but rather the core audio functions: SDL_OpenAudioDevice(), SDL_QueueAudio() etc. You should also avoid outputting an audio format not supported by the hardware, thus forcing a conversion that can add to the latency.

I updated the source code, changing SDL_mixer to SDL_OpenAudioDevice (I didn't know this existed). With this the mean reaction time for sound on Linux is reduced from 550 ms (SDL_mixer) to 350 ms.
Using SDL_OpenAudioDevice on Windows I get 250 ms, and with PlaySound still 200 ms.

@namark It is about a 60 ms delay. I measure the time I need to press the left mouse button after the picture changes from red to green, or, in the second test, after I notice that a sound is playing.
How should I capture this with a camera or software? E.g. if I change the screen color every second, the video will also just show a change every second. To measure the delay of a picture update I need an initial reference point, and if I start the recording right after the update command I will also measure the delay of that capture call.

@rtrussell My hardware is the same on Windows and Linux. Or did you mean software/OS?

I think the “proper” way to measure delay is to have a high-speed camera and film both your display and some kind of button (that ideally doesn’t need to be pressed far until it activates) so you can see at what timestamp (or frame) in the video the button is pressed and at what timestamp the visual change appears on the screen

By the way, regarding sound delay: on Linux, pulseaudio (which is used by default on the major distros nowadays) will introduce some additional delay. Not sure how much; I've read that you get 100-200 ms of delay with pulseaudio, but that includes the delay you'd get anyway from the sound driver etc.

https://juho.tykkala.fi/Pulseaudio-and-latency claims that they could reduce the delay from 100ms to 19ms by changing pulseaudio settings

@react Yeah it’s not easy to come up with a good way to test this.

The most basic test would be to make changes at a smaller interval than the suspected delay, for example changing the color at 60 fps. If you see frame skips or slow updates in the recording, then you've caught the bug.
Otherwise it could be that the changes are played back perfectly, just shifted 60 ms in time. For that you could use the system clock to sync the SDL app with another control app that is also scheduled by the system clock (maybe a simple console app? an alarm clock?), and record them side by side. You probably can't sync them perfectly, but it should be good enough to detect 60 ms.

Now that I said it, I’ll have to try these myself, but might take me a while.

I meant hardware and driver. If the audio stream you send to (e.g.) SDL_QueueAudio() is not in a format that the native driver can handle (perhaps you are outputting 44.1 kHz sampling but it only accepts 48 kHz sampling) SDL may do an automatic conversion for your convenience. But any such conversion is likely to add latency. When calling SDL_OpenAudioDevice() you can specify whether conversion should be permitted.
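
For illustration, something like this (an untested sketch; the numbers are just examples): passing SDL_AUDIO_ALLOW_ANY_CHANGE instead of 0 makes SDL report the device's native format in "obtained" rather than converting behind the scenes, and a smaller "samples" value reduces the buffering itself (4096 sample frames at 44.1 kHz is already about 93 ms of audio per buffer, 512 is about 12 ms):

// Untested sketch: query what the audio device natively wants instead of forcing a conversion.
#include <SDL2/SDL.h>
#include <cstdio>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_AUDIO) < 0) return 1;

    SDL_AudioSpec desired, obtained;
    SDL_zero(desired);
    desired.freq = 44100;
    desired.format = AUDIO_S16;
    desired.channels = 2;
    desired.samples = 512;   // ~12 ms per buffer at 44.1 kHz instead of ~93 ms with 4096

    // With allowed_changes = 0 (as in the test code above) SDL silently converts to the
    // desired spec if the hardware differs; with SDL_AUDIO_ALLOW_ANY_CHANGE it reports
    // the native format and the application has to provide data in that format itself.
    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &desired, &obtained,
                                                SDL_AUDIO_ALLOW_ANY_CHANGE);
    if (!dev) { fprintf(stderr, "open failed: %s\n", SDL_GetError()); return 1; }

    // If these differ from the request, feeding 44.1 kHz stereo S16 data would have
    // required a conversion somewhere; better match the WAV to these values instead.
    printf("device wants: %d Hz, format 0x%x, %d channels, %d sample frames\n",
           obtained.freq, obtained.format, obtained.channels, obtained.samples);

    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}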

@Daniel_Gibson Interesting. My results are about 100 ms slower compared to Windows, which would fit your prediction. I did some tests: it shows a latency of 0 for me. Changing it to other values hasn't worked so far; it generates a new sink but no sound comes out yet. I will try again.

@namark Console output also seems to have a delay. I made a small test program for this as well; there it's also about 250-260 ms.

//clang -std=c++14 ./reactConsole.cpp -lstdc++ -O3

//#include "stdafx.h"
#include <iostream>
#include <chrono>
#include <ctime>
#include <thread>
int main()
{
	std::srand(std::time(nullptr));
	unsigned long long int delay;	

	std::chrono::steady_clock::time_point start;
	std::chrono::steady_clock::time_point end;
	
	while (1) {
		start = std::chrono::steady_clock::now();
		end = std::chrono::steady_clock::now();
		delay = ((((unsigned long long int)std::rand()) * 641234 + ((unsigned long long int)std::rand()) * 32000 + std::rand()) % 4000000) + 2000000;

		while ((unsigned long long int)std::chrono::duration_cast<std::chrono::microseconds>(end - start).count() < delay) {
			std::this_thread::sleep_for(std::chrono::milliseconds(1));
			end = std::chrono::steady_clock::now();
		}
		std::cout << "####################### press return key ########################### " << "\n";

		start = std::chrono::steady_clock::now();
		char eingabe = std::getchar();
		if (eingabe == 20)
			break;

		end = std::chrono::steady_clock::now();
		delay = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();

		std::cout << "delay "
			<< (delay / 1000)
			<< " \n";
	}
    return 0;
}

@rtrussell I had 48000 in the code, but my sound file is only 44100, and my sound card (pactl list short sinks) also supports only up to 44100. I changed the 48000 in the code to 44100, but there's no improvement in response time, still about 350 ms. Thanks for pointing it out though; I found a source code error thanks to this.

@react That console program has similar issues: it's unclear what is measured (stream input? output? buffering?), and you also introduce a human element that makes it hard to reproduce.

What I meant is something like this:

#include <chrono>
#include <string>

using namespace std;
using namespace chrono;

int main(int argc, char const* argv[])
{
	if(argc < 2)
		return -1;

	system_clock::time_point target(seconds(stoi(argv[1])));

	while(system_clock::now() < target);

	return 0;
}

It accepts the target time point in seconds and waits until that point.

Example of a test in console would be (using GNU coreutils to get the desired timepoint in seconds and pass it through):

date --date=6:02:30 +%s | xargs ./ac

And you can record that alongside something like:

watch -n 0.016 date +%T%3N

That should show system time including milliseconds.

If that simple program, and the watch, and the equivalent SDL test that changes color at the same time point all appear to do it on the same frame (or neighboring frames) in the video, then there are 2 possibilities

  • there is no big delay, and you were measuring something else (maybe input lag; needs a different test, which should be much easier to devise knowing that there is no delay in graphics)
  • they are all delayed by exactly the same amount, which suggests it's a more global issue, not just SDL.

Otherwise you should see the roughly 4 frames of difference. If you don't trust the system console, I guess you'll have to implement an equivalent application using a native graphics API :confused:

That said I think the simple framerate test I suggested at the beginning is enough to prove that there is no delay in graphics, since it must either drop frames or somehow somewhere someone implemented some sort of quadruple buffering, which is unlikely.

@namark watch only accepts intervals down to 0.1 seconds for me (minimum value in the man page). It would also include the I/O stream output delay.
I can change the color at 60 fps, or at 100, 1000, 30, 10. It all looks fine, I only get some tearing. But I used a 1/fps wait after drawing; I will check again with a true 60 fps.

It seems it is not (only) the output delay: if I change the mouse button to a keyboard key (space), my mean reaction time decreases.

I made a similar program with GLFW. There I get times below 200 ms on Linux when using the keyboard, and around 240 ms with the mouse.
Some code:

//g++ -Wl,-Rlib -Iinclude -Llib reactGFLW.cpp -o test -lglfw -std=c++14 -lGL -lGLU

#include <GLFW/glfw3.h>
#include <stdlib.h>
#include <stdio.h>
#include <chrono>
#include <ctime>
#include <iostream>

static void error_callback(int error, const char* description)
{
    fputs(description, stderr);
}
int press = 0; // trial state: 0 = waiting out the random delay, 1 = stimulus shown, 2 = responded, 3 = start new trial

static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
   if (key == GLFW_KEY_SPACE && action == GLFW_PRESS && press==1)
        press = 2;
}
static void mouse_button_callback(GLFWwindow* window, int button, int action, int mods)
{
    if (glfwGetMouseButton(window, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS && press==1)
        press = 2;
}

int main(void)
{
    GLFWwindow* window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);

    GLFWmonitor* monitor = glfwGetPrimaryMonitor();
    const GLFWvidmode* mode = glfwGetVideoMode(monitor);
    glfwWindowHint(GLFW_RED_BITS, mode->redBits);
    glfwWindowHint(GLFW_GREEN_BITS, mode->greenBits);
    glfwWindowHint(GLFW_BLUE_BITS, mode->blueBits);
    glfwWindowHint(GLFW_REFRESH_RATE, mode->refreshRate);

    //window = glfwCreateWindow(mode->width, mode->height, "My Title", monitor, NULL);
    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);

    //const GLFWvidmode* mode = glfwGetVideoMode(monitor);
    //glfwSetWindowMonitor(window, monitor, 0, 0, mode->width, mode->height, mode->refreshRate);//needs glfw 3.2

    if (!window)
    {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSetKeyCallback(window, key_callback);
    glfwSetMouseButtonCallback(window, mouse_button_callback);


	std::chrono::steady_clock::time_point start;
	std::chrono::steady_clock::time_point end;

    std::srand(std::time(nullptr));
	unsigned long long int delay;
    press = 3;
    
    int counter = 0;
    unsigned long long int  delaySum = 0;
    while (!glfwWindowShouldClose(window))
    {
        if (press==3){
            delay = (( ((unsigned long long int)std::rand()) * 641234+((unsigned long long int)std::rand())*32000+ std::rand()) % 4000000) + 2000000;
            start = std::chrono::steady_clock::now();
	        end = std::chrono::steady_clock::now();
            press = 0;
        }

        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;

        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef((float) (press)*100, 0.f, 0.f, 1.f);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(-0.6f, -0.4f, 0.f);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(0.6f, -0.4f, 0.f);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(0.f, 0.6f, 0.f);
        glEnd();
        glfwSwapBuffers(window);

        end = std::chrono::steady_clock::now();
        if ((unsigned long long int)std::chrono::duration_cast<std::chrono::microseconds>(end - start).count()>=delay && press==0){
            start = std::chrono::steady_clock::now();
            press = 1;
        }
        if (press==2){
            end = std::chrono::steady_clock::now(); 
            delay = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
		    counter++;
		    delaySum += delay;
		    std::cout << "delay "
			<< (delay / 1000)
			<< "ms. mean "<<(delaySum/(1000*counter))  << " trials " << counter << " \n"; 
            press = 3;        
        }

        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}

Maybe I can run them both at the same time and change the color based on the system clock (will update later on).

Update: SDL and GLFW update on the same frame! Recorded with screen recorder software. So it is input lag?
Test code:

//clang -std=c++14 ./sdl.cpp -o sdl  -lSDL2 -lstdc++

//only for windows (for playSound)
#ifdef _WIN32 
    #include "stdafx.h"
    #include <windows.h>
#endif

#include <SDL2/SDL.h>
#include <stdio.h>
#include <ctime>
#include <iostream>
#include <chrono>


void runGame(SDL_Window* window) {
		

	unsigned long long int delay, delaySum;	
	std::chrono::steady_clock::time_point start;
	std::chrono::steady_clock::time_point end;
	SDL_Surface *screenSurface = SDL_GetWindowSurface(window);

	SDL_Event event;	
	SDL_DisplayMode DM;

	int col = 0;
    std::time_t timeOld = 0;

	while (1) {		
		SDL_PollEvent(&event);
		if (event.type == SDL_KEYDOWN && event.key.keysym.sym == SDLK_ESCAPE) //exit with Esc
			return;

        std::time_t time = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
        if(time!=timeOld){
            timeOld = time;
            col = (col+1)%2;
        }

        if (col == 1)
            SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0x11, 0xAA, 0x11));
        else
            SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0xAA, 0x11, 0x11));
		SDL_UpdateWindowSurface(window);
	}

}

int main(int argc, char* args[])
{
	
	SDL_Window* window = NULL;
	SDL_Surface* screenSurface = NULL;

	if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO) < 0) {
		fprintf(stderr, "could not initialize sdl2: %s\n", SDL_GetError());
		return 1;
	}
	
	SDL_DisplayMode DM;
	SDL_GetCurrentDisplayMode(0, &DM);
	//auto Width = DM.w;
	//auto Height = DM.h;	
    auto Width = 640;
    auto Height = 480;

	window = SDL_CreateWindow(
		"delay",
		SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
		Width, Height,
		SDL_WINDOW_SHOWN
	);

	if (window == NULL) {
		fprintf(stderr, "could not create window: %s\n", SDL_GetError());
		return 1;
	}	

	screenSurface = SDL_GetWindowSurface(window);

	runGame(window);

	SDL_DestroyWindow(window);
	SDL_Quit();
    return 0;
}
-----------------------------glfw-------------------------
//g++ -Wl,-Rlib -Iinclude -Llib sdl.cpp -o sdl -lglfw -std=c++14 -lGL -lGLU

#include <GLFW/glfw3.h>
#include <stdlib.h>
#include <stdio.h>
#include <chrono>
#include <ctime>
#include <iostream>

static void error_callback(int error, const char* description)
{
    fputs(description, stderr);
}

static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
}


int main(void)
{
    GLFWwindow* window;
    glfwSetErrorCallback(error_callback);
    if (!glfwInit())
        exit(EXIT_FAILURE);

    GLFWmonitor* monitor = glfwGetPrimaryMonitor();
    const GLFWvidmode* mode = glfwGetVideoMode(monitor);
    glfwWindowHint(GLFW_RED_BITS, mode->redBits);
    glfwWindowHint(GLFW_GREEN_BITS, mode->greenBits);
    glfwWindowHint(GLFW_BLUE_BITS, mode->blueBits);
    glfwWindowHint(GLFW_REFRESH_RATE, mode->refreshRate);

    window = glfwCreateWindow(640, 480, "Simple example", NULL, NULL);

    if (!window)
    {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSetKeyCallback(window, key_callback);

    std::time_t timeOld = 0;
    int rot = 0;
    while (!glfwWindowShouldClose(window))
    {
        float ratio;
        int width, height;
        glfwGetFramebufferSize(window, &width, &height);
        ratio = width / (float) height;

        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        
        std::time_t time = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
        if(time!=timeOld){
            timeOld = time;
            rot++;
        }
        glRotatef((float) (rot)*100, 0.f, 0.f, 1.f);
        glBegin(GL_TRIANGLES);
        glColor3f(1.f, 0.f, 0.f);
        glVertex3f(-0.6f, -0.4f, 0.f);
        glColor3f(0.f, 1.f, 0.f);
        glVertex3f(0.6f, -0.4f, 0.f);
        glColor3f(0.f, 0.f, 1.f);
        glVertex3f(0.f, 0.6f, 0.f);
        glEnd();
        glfwSwapBuffers(window);


        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
}

@react I only skimmed through your code for input handling so there might be something obvious I’m missing, but got a few random guesses for the input lag:
The SDL_Delay(1) in your event loop (the top one) seems a bit dodgy, since the event queue is shared among all the different types of events. Theoretically there could be a bunch of mouse motion events queued up and you would be delaying at least 1 ms for each.
Maybe try a standard polling loop (similar to the second one you have there), and if it still delays, try using the timestamp member of the event for your calculations (theoretically the physics engine of a game should be able to do that too).
As a last resort you can try SDL_AddEventWatch/Filter API, it is supposed to be faster for handling some important events.
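
Something along these lines (an untested sketch, just to show the idea): an event watch sees the mouse event as soon as it is added to the queue, so you can compare both SDL_GetTicks() at that moment and the event's own timestamp against the moment you updated the window surface:

// Untested sketch: measure click latency via an event watch instead of the polling loop.
#include <SDL2/SDL.h>
#include <cstdio>

static Uint32 shown_at = 0;   // SDL_GetTicks() taken right after SDL_UpdateWindowSurface()

static int click_watch(void* /*userdata*/, SDL_Event* e) {
    // Runs as soon as the event enters the queue (usually from the OS message pump),
    // so it excludes any latency of the polling loop itself.
    if (e->type == SDL_MOUSEBUTTONDOWN && e->button.button == SDL_BUTTON_LEFT && shown_at)
        printf("click after %u ms (event timestamp says %u ms)\n",
               SDL_GetTicks() - shown_at, e->button.timestamp - shown_at);
    return 1;   // return value is ignored for event watches
}

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) < 0) return 1;
    SDL_Window* w = SDL_CreateWindow("watch test", SDL_WINDOWPOS_CENTERED,
                                     SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_SHOWN);
    SDL_AddEventWatch(click_watch, NULL);

    // Show the "stimulus" (green screen) and remember when it was pushed to the window.
    SDL_Surface* s = SDL_GetWindowSurface(w);
    SDL_FillRect(s, NULL, SDL_MapRGB(s->format, 0x11, 0xAA, 0x11));
    SDL_UpdateWindowSurface(w);
    shown_at = SDL_GetTicks();

    SDL_Event e;
    while (SDL_WaitEvent(&e) && e.type != SDL_QUIT && e.type != SDL_KEYDOWN) {}
    SDL_DestroyWindow(w);
    SDL_Quit();
    return 0;
}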

@namark I added the delay to avoid 10^6 updates per second. That loop only runs (and waits) until the random wait time is over, so even if there is a long queue it should have no impact.
I also tested without the SDL_Delay(1); it had no impact (only -1 ms for the 2nd loop).

Using event.button.timestamp and SDL_GetTicks() (taken after SDL_UpdateWindowSurface) I get the same times as with chrono (in ms). Same with SDL_AddEventWatch.

I found the issue, with the help of a game developer (from that game)!
For those who are interested in this topic: the reason for the delay is that middle mouse button emulation is active. With it enabled, the X server waits briefly after a left or right button press to see whether the other button is also pressed (to emulate a middle click), which adds about 50 ms of delay to every click.

No idea why this is enabled by default.