SDL 3.0

I have been working on the GPU side for a while.
Some of my structure/notes (this is just an outline): http://multiversesocial.com/codescraps/GPU.c
I want to make a homogenized GPU channel. I was starting with NVIDIA & Adreno.
I had this idea: allocate enough blocks for the desktop, plus 256 more to flag/allocate as user threads…
The SDL shader language looks interesting; any suggestions/thoughts appreciated.

It might be too late or considered unimportant, but it costs nothing to try.

Would it be possible to have, either as the default or as an option (and if it's an option, either at runtime or at build time, anything will do), something that would release all the allocated memory? In SDL_Quit() or in another dedicated function (like I said, anything will do)?

Also, something that would avoid access to uninitialized memory. I know that what valgrind/drmemory etc. consider uninitialized may not, in fact, be uninitialized, but it still generates a nasty log.

So far the only method I have found to get a window without memory leaks or access to uninitialized memory is X11 (Xlib etc.), and I suspect that's because the nasty stuff happens in a different process.

I know that those tools have procedures to suppress warnings, but it has always felt unclean.
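(For reference, the kind of suppression entry I mean looks like this; the name and patterns here are only an illustration:)

{
   suppress_sdl_still_reachable
   Memcheck:Leak
   match-leak-kinds: reachable
   fun:malloc
   ...
   obj:*/libSDL2*
}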

I also know that upon closing, the resources are reclaimed by the operating system, which motivates my suggestion to have it as an option.

Also, a bit independently but not entirely: I wonder how SDL2 behaves on microcontrollers (ESP32 / Pi Pico / Arduino), and considering it is supposedly a "Simple DirectMedia Layer", it would make sense to have it usable on those platforms. I suspect SDL2 is orders of magnitude too big (those environments tend to have single-digit megabytes of storage available, sometimes less, and kilobytes of RAM).
I think there is nothing preventing SDL3 from having a clear separation between "what the library can do" and "how it's done", which would allow someone to implement a minimalistic backend for SDL3 on those environments; see the sketch below.

Overall, if it can work on machines as different as a desktop and a microcontroller, that is about as close as you get to a guarantee that it will work everywhere.

(Note: most of those microcontrollers don't have enough RAM to hold a full buffer of pixels to render, and work by outputting one line at a time. Can SDL3 "understand" those constraints, or will OpenGL plus full framebuffers always be more or less assumed in the future?)
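To illustrate the kind of separation I mean, purely hypothetically (none of these names exist in SDL), the "how it's done" side could be a small table of callbacks that a platform port fills in, including a per-scanline output path for devices that can't afford a framebuffer:

#include <stdint.h>

/* Hypothetical sketch only; this is not an existing SDL interface. */
typedef struct TinyVideoBackend {
	int  (*init)(int width, int height);
	void (*quit)(void);
	/* Receives one finished scanline at a time, so the library never
	 * needs to hold a full framebuffer on the device. */
	void (*present_line)(int y, const uint16_t *pixels, int width);
} TinyVideoBackend;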

Most microcontrollers are far too constrained to be a realistic target for SDL. The Pi Pico only has 264 kilobytes of RAM. The ATmega328P that runs most Arduinos has only 2 kilobytes of RAM (and only 32 KB of program storage). ESP32-based systems also typically have under half a megabyte of RAM.

There isn’t even a standardized way to get video and audio out of them.

As to your first point: if you're already at the point in your program of calling SDL_Quit(), then what does it matter?

It matters because, when used in a teaching context, it's kind of annoying to have those warnings. You try to teach people that they should properly release the memory they allocate, that they shouldn't read uninitialized memory, etc., and then you can't find a single GUI library (besides Xlib on Linux) that doesn't teach them the opposite.

I am not a professional teacher.

Having a clean foundation would also allow people to develop clean layers on top of it. Maybe I am too naive or innocent, but I think it's a mindset to have. People have the idea that GUI = poor memory management, poor security, etc., and because of that they themselves start to slack in those regards, which in turn makes the prophecy self-fulfilling.

(And believe me: I tried SDL, but I also tried Qt, wxWidgets, and every alternative I could find.)

I don’t know where you get the idea that GUI = lazy/poor memory management.

Example code doesn't tend to show cleanup or error checking, in order to keep a high signal-to-noise ratio and focus on the thing being shown in the example. That doesn't mean actual applications don't do it.

It’s not really even possible or advisable for SDL to track all of an application’s memory allocations. What if custom memory allocators are being used? It’s pretty common for games to use custom memory allocation schemes. What about languages that don’t even have manual memory management? You don’t want SDL just making stuff disappear from underneath the language runtime.

As to SDL_Quit, my point still stands: if you’re waiting until application exit to clean up all your memory, having one extra “clean up everything” function is pointless. It’s up to the application to clean things up when it makes sense for the application to do so.

Anyway, let’s not derail the thread with this.


You misunderstood me. Entirely.

I only ask SDL to initialize the memory it uses and to deallocate the memory it allocates.

If I create a simple window with SDL and close it right after (with SDL_Quit() and everything I can imagine), I still get a report of roughly 50 MB of memory with valgrind.
(It has been a long time, so I am going from what I remember.)
If I try to do some OpenGL on top of it, I get logs saying the program is accessing uninitialized memory.

I understand that most of those are likely false positives: a one-time allocation with malloc will trigger a warning if not freed, even if it's not really a leak given that it only happens once.
In the same way, I imagine communication with drivers can look like "passing a pointer to the driver so it stores information there". If valgrind fails to detect that other code (the driver) wrote something at that location, it will report a false positive (saying the program is reading uninitialized memory, which is false).

Now, the fact that they are false positives doesn't mean they aren't there. And while it's not SDL's job to fix valgrind's shortcomings… well, it could help a bit.
Fixes for the two cases above are easy: free the allocated memory, even if it's a one-time allocation, and initialize memory (with zeroes) even if it's going to be written to by the driver.
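For the one-time case, I mean something like this (hypothetical names, just to illustrate the pattern):

#include <stdlib.h>

/* Hypothetical one-time allocation: freeing it at exit removes the
 * "still reachable" report even though it was never a true leak. */
static int *lookup_table;

static void free_lookup_table(void)
{
	free(lookup_table);
	lookup_table = NULL;
}

void init_once(void)
{
	/* calloc instead of malloc: the block starts zeroed, so a later
	 * read is never "uninitialized" in valgrind's eyes, even if a
	 * driver was expected to fill it first. */
	lookup_table = calloc(256, sizeof *lookup_table);
	atexit(free_lookup_table);
}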

Without considering myself an expert programmer (far from it), I think those should be doable. I can think of one case where the free wouldn't be possible: if the driver requires the memory to remain available even after the program has exited (by which I mean: after main has returned).
I don't see a case where the driver would require the program to access what would be considered uninitialized memory.

Also, it may be tangential to this issue, but I find Zig's allocator idea interesting. I don't know how hard it would be to have SDL take an allocator and use it to allocate/free/etc. memory.

I know what I am asking for is not "free": it costs effort (and a bit of CPU) to free the one-time allocations. It also takes effort, and quite a few more CPU cycles, to initialize memory. People may not want to lose cycles on that.

Which is why I ask for it to be behind a compile-time and/or runtime option. Having the allocator system should at least partially solve those issues (the allocator could initialize the memory and free everything at the end; I know doing so does not prevent leaks, but here we are talking about false positives, not true positives). It would only partially solve the issue, as stack allocations could remain uninitialized.

I am not asking SDL to do anything fancy/magic. Just to set a good example. Something I could use and say to my students: "OK, here is how you create a window, receive events, and properly quit. As you can see, it doesn't generate any warning/error when used with stock valgrind (without any special configuration/warning suppression). Now that I have given you this clean base, I want you to do this and that. And I'll test your program, and I want valgrind to still not complain. I gave you something clean; keep it clean."

(An allocator would also allow SDL to run on weird environments with segmented memory. And of course it would come with a default allocator built on top of malloc/free/realloc etc.)

Are you calling SDL_DestroyRenderer() and then SDL_DestroyWindow() when done?

Both Valgrind on Linux and Apple’s leaks tool on macOS report no leaks with the following program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <SDL2/SDL.h>

static void bailout(const char *title, const char *message)
{
	fprintf(stderr, "ERROR: %s: %s\n", title, message);
	SDL_Quit();
	exit(EXIT_FAILURE);
}

int main(int argc, char **argv)
{
	if(SDL_Init(SDL_INIT_VIDEO) != 0) {
		bailout("Can't init SDL", SDL_GetError());
	}

	SDL_Window *window = SDL_CreateWindow("Leak Test",
			SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
			800, 600,
			0);
	if(window == NULL) {
		bailout("Can't create window", SDL_GetError());
	}

	uint32_t renderFlags = SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC;
	SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, renderFlags);
	if(renderer == NULL) {
		bailout("Can't create renderer", SDL_GetError());
	}

	int running = 1;
	while(running) {
		SDL_Event event;
		while(SDL_PollEvent(&event)) {
			switch(event.type) {
			case SDL_QUIT:
				running = 0;
				break;
			}
		}

		SDL_SetRenderDrawColor(renderer, 0, 127, 255, 255);
		SDL_RenderClear(renderer);
		SDL_RenderPresent(renderer);
	}

	SDL_DestroyRenderer(renderer);
	SDL_DestroyWindow(window);
	SDL_Quit();

	return 0;
}

leaks:

leaks Report Version: 4.0, multi-line stacks
Process 7633: 24645 nodes malloced for 4561 KB
Process 7633: 0 leaks for 0 total leaked bytes.

Valgrind (process ID removed because it was messing up Discourse :man_shrugging:):

 LEAK SUMMARY:
    definitely lost: 0 bytes in 0 blocks
    indirectly lost: 0 bytes in 0 blocks
      possibly lost: 0 bytes in 0 blocks
    still reachable: 53,781 bytes in 947 blocks
         suppressed: 0 bytes in 0 blocks
 Reachable blocks (those to which a pointer was found) are not shown.

So… yeah.

This sounds like an issue with your GPU’s driver. Running the above program on my Raspberry Pi 4 (the only Linux system I have) gives no warnings about using uninitialized memory, even though the renderer is using OpenGL on the backend.

Both SDL2 and SDL3 already support setting custom allocators via SDL_SetMemoryFunctions().
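For example, here's a minimal sketch that routes everything through zeroing wrappers (just one way to use it; the hooks have to be installed before SDL makes its first allocation, i.e. before SDL_Init()):

#include <stdio.h>
#include <stdlib.h>

#include <SDL2/SDL.h>

/* Zero-initializing wrappers: every block SDL allocates starts zeroed.
 * Caveat: memory gained by growing a block with realloc is not zeroed. */
static void *my_malloc(size_t size)               { return calloc(1, size); }
static void *my_calloc(size_t nmemb, size_t size) { return calloc(nmemb, size); }
static void *my_realloc(void *mem, size_t size)   { return realloc(mem, size); }
static void my_free(void *mem)                    { free(mem); }

int main(int argc, char **argv)
{
	SDL_SetMemoryFunctions(my_malloc, my_calloc, my_realloc, my_free);

	SDL_Init(SDL_INIT_VIDEO);
	SDL_Quit();

	/* SDL2 also tracks how many allocations made through these hooks
	 * are still outstanding. */
	printf("Outstanding SDL allocations: %d\n", SDL_GetNumAllocations());
	return 0;
}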

This line could be avoided.

Also, a quick Google search (I didn't cherry-pick; I literally took all the first results):

EDIT: the site rightly pointed out that, as a new user, I can post at most 2 links. Maybe it's better that way; it did indeed have a "spammy" look.
Anyway, the search was "SDL2 valgrind".

… And I’ll just stop there. I could go on.

I am just asking that, in SDL3, these warnings/errors be considered an issue. Clearly it's not the most important/urgent thing to do, and maybe there are cases where it's simply not possible to avoid them. Valgrind is not perfect.
But AFAIK, they wouldn't be the hardest things to fix either.

I will definitely give that custom allocator a go to see if it's better. While I am at it, I'll install the latest SDL and SDL2 on my machine and post the results for a minimal application.

Thanks for your insights. Maybe this is the solution I was looking for.

OK, I just installed SDL2 (sudo apt-get install libsdl2-dev) and created a main containing only three lines:

#include <SDL2/SDL.h>
int main(void) {
	SDL_Init(SDL_INIT_VIDEO);
	SDL_Quit();
	return 0;
}

Compiled with gcc (gcc main.c -lSDL2).
Launched with "valgrind ./a.out".

Results: 322,718 bytes still reachable, allocated in 3,491 blocks. 8 errors from 4 contexts.
It includes things like "invalid read of size 8: address is 9 bytes inside a block of 15 alloc'd". The faulty function is strncmp, which suggests someone made a calculation mistake there. As allocators can hand out a bit more than what you ask for, it's possible to read beyond the requested size and get away with it. Still not something that should be there.

(But the allocation seems to be done by dlopen.)

The log is ~9 KB long. If I relaunch it with more verbose parameters, "valgrind --show-reachable=yes --leak-check=full ./a.out", it goes up to 2,412 KB.

I also tried an even simpler main:

#include <SDL2/SDL.h>
int main(void) {
	SDL_Init(0);
	SDL_Quit();
	return 0;
}

This time no errors (great :+1:), but still 884 bytes reachable (in 20 blocks).

Note: I am using Ubuntu under WSL, but I had similar issues in the past, both with other virtual machines (VirtualBox) and on non-virtual systems.

And I haven't even gotten to actually opening a window.

(Also, I remember those numbers being lower in the past.)