SDL_rand and SIGSEGV

I wanted to try out SDL_qsort, so I tested the following mini-program.
The only problem is that the program triggers a SIGSEGV on roughly every other start.
If it runs without a SIGSEGV, it sorts correctly.
The SIGSEGV is always triggered inside SDL_qsort.

What could be the reason for this?

#include <SDL3/SDL.h>

#define array_len 50

static int num_compare(const void *_a, const void *_b)
{
    int a = *((int *) _a);
    int b = *((int *) _b);
    return (a > b);
}

int main(int argc, char *argv[])
{
    int nums[array_len];
    for (int i = 0; i < array_len; i++) {
        nums[i] = SDL_rand();
        SDL_Log("%i", nums[i]);
    }

    SDL_qsort(nums, array_len, sizeof (int), num_compare);

    for (int i = 0; i < array_len; i++) {
        SDL_Log("%i", nums[i]);
    }
    return 0;
}

Your comparison function is supposed to return:

  • A negative value if a should come before b.
  • A positive value if a should come after b.
  • Zero if a and b are considered equal.

While Peter’s answer is sufficient to resolve the issue, I would like to expand on it: the reason is that you are breaking the ordering invariants the algorithm assumes, which causes memory corruption.

How in the world can a wrong result from an ordering callback cause memory corruption?!? The worst it should be capable of causing is incorrect sorting results.

At the extreme case, you should be able to feed an RNG into your sorting algorithm as the callback, and get back a randomly-shuffled array. If “incorrect” results cause corruption, there is something wrong with the algorithm.

I believe it’s because it causes a stack buffer overflow in qsort_r_words. The code is kinda cryptic to understand though.

It reminds me of the 30-year-old qsort_r glibc bug with a similar problem that was fixed in January :grin:

I’ve posted an issue and merge request that fixes this.
You can track it here: SDL_qsort stack buffer overflow with non-transitive comparison function · Issue #10055 · libsdl-org/SDL · GitHub

It’s a bug in Mathias’ code, but I don’t think it’s necessarily a bug in SDL. I believe that with standard qsort it’s UB to pass an incorrectly implemented comparison function. That said, getting a segfault is not nice (and potentially a security concern), so if it can be avoided easily without too much overhead then it’s probably the right thing to do. But I wouldn’t go as far as Mason_Wheeler and assume it will give any sensible output with a “random” comparison; at the very least, the probability of each permutation would probably not be the same.

I wonder how well SDL_qsort() handles data with many duplicates.
Standard quicksort is really bad at it; the usual fix is to create three partitions in the partition step: one with elements strictly smaller than the pivot, one with elements equal to the pivot (this one is different!), and one with elements strictly bigger than the pivot. See also Quicksort - Wikipedia

As far as I can tell from the (TBH a bit hard to read) SDL_qsort source code, it does standard quicksort partitioning, which will cause the algorithm to become really slow when there’s lots of duplicate elements.

Generally it’s a good idea to use the standard libc functions instead of SDL’s replacements, unless you have a good reason to avoid the libc, such as when writing a library that you want to distribute in binary form on Windows and want to avoid depending on the CRT, because on Windows almost every compiler (version) has its own CRT.

The libc functions should basically always be at least as fast as the SDL replacements, and usually faster because they tend to be much better optimized.

And here is a “proof” for standard qsort: Compiler Explorer

Code and sample output

// The following program sorts an array with 10 elements
// 10000 times and counts how often each value ends up as
// the first element. Note that the values are always in
// the same order before we call qsort.
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

#define array_len 10
#define runs 10000

static int num_compare(const void *_a, const void *_b)
{
	return rand() % 3 - 1;
}

int main(int argc, char *argv[])
{
	int nums[array_len];
	int count_first[array_len] = {0};
	srand(time(NULL));
	for (int n = 0; n < runs; n++) {
		for (int i = 0; i < array_len; i++) {
			nums[i] = i;
		}
		qsort(nums, array_len, sizeof(int), num_compare);
		/*for (int i = 0; i < array_len; i++) {
			printf("%d ", nums[i]);
		}
		printf("\n");*/
		count_first[nums[0]]++;
	}
	for (int i = 0; i < array_len; i++) {
		printf("%d was the first value %.1f%% of the time.\n", i, (100.0 * count_first[i]) / runs);
	}
	return 0;
}

Sample output:

0 was the first value 29.5% of the time.
1 was the first value 15.3% of the time.
2 was the first value 14.5% of the time.
3 was the first value 4.9% of the time.
4 was the first value 2.6% of the time.
5 was the first value 14.9% of the time.
6 was the first value 6.9% of the time.
7 was the first value 7.6% of the time.
8 was the first value 2.5% of the time.
9 was the first value 1.3% of the time.

I haven’t tested this with SDL_qsort but I would not expect it to work much better.

The way you implement the random comparison function also matters. In the above code I just returned -1, 0 and 1 with equal probability. I guess it would have been more fair to only return -1 and 1, each with 50% probability, which seems to help but the result is still not perfect.

If you want to shuffle, just use a proper shuffling algorithm.