SDL2_gfx extensions

nice!

I made some computations to understand this alpha thing. The formula I used, e1/(e1+e2), turns out to be a very good approximation of the horizontal distance (not squared) between the pixel and the ideal circle (and e is not necessarily small: it’s an integer between -2r and 2r, where r is the radius).

Now comes the physics. My guess would be that the alpha value is (linearly?) related to the “intensity” of the pixel (I should rather say the “value”). But then the energy is the square of the intensity. My second guess is that the eye rather perceives the energy, and not the intensity.

Now look at the simple situation with an ideal pixel at position (float)x in [0,1]. We want to approximate it by two physical pixels at x=0 and x=1. The question is how to compute the intensity of each pixel, i1 and i2. [which is the same as the alpha value, if my first guess is correct]. The surface proportions of the ideal pixel are p1 = 1-x and p2 = x. Therefore we should take

i1= sqrt(1-x) and i2 = sqrt(x),

so that i1^2 + i2^2 = 1. So you are right, we need the square root (or a trick to approximate it).

BUT, it’s not clear (to me) that the alpha channel gives the physical intensity. If we assume instead that the alpha channel is roughly proportional to the energy, then taking

i1 = 1-x and i2 = x,

(as I did in my code), it would give the correct answer.

In any case, I’d be curious to know whether the naked eye is able to tell the difference between the two formulas, and to tell which one ‘looks nicer’ (which, after all, is what we want).

EDIT: after some googling, it seems that most people in computer vision use the alpha value directly as an indicator of “luminous intensity” (which is what I called energy, I think). And they simply take the area proportion of the pixel to compute the alpha. In this case, my second formula (without the sqrt) would be correct.

Gamma comes into this, and you don’t seem to be taking it into account. The nominal gamma differs between platforms, specifically Apple uses a different gamma (1.8) from virtually everybody else (roughly 2.2), so if you are trying to be very precise then this is a factor.

However I think you’ll find that most people simply consider gamma as cancelling out the non-linearity of the eye (it doesn’t, accurately, but it’s a convenient assumption) and set alpha equal to the proportion of the pixel inside the shape.

If you really want to get technical, and I’m not sure that this is the right forum for that, then you need to be considering the frequency domain as well as the spatial domain. The principal purpose of antialiasing, after all, is to reduce aliases (the clue is in the name!) and aliasing is fundamentally a frequency-domain phenomenon. It arises because of the presence of frequencies in the source signal which cannot be represented in the sampled signal (typically they exceed the Nyquist Limit of half the sampling frequency).

If you want to achieve ultra-precise antialiasing then pretty much the only way is to supersample the source image to a much higher sampling frequency than you ultimately require (in both X and Y directions) then pass that supersampled image through a 2-dimensional low-pass filter designed to remove the frequencies that would result in aliasing.
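As an illustration of the principle only (not a practical implementation): a disc can be rasterised at 4x resolution in each direction with hard edges and then filtered down. A simple box filter is used here for brevity; a real low-pass filter designed to attenuate aliases would of course be better:

```c
#define SS 4   /* supersampling factor in each direction */

/* Coverage of the target pixel (px,py) by a disc of radius r centred
 * at (cx,cy), estimated from SS x SS subsamples. Returns alpha in [0,1]. */
static double coverage(int px, int py, double cx, double cy, double r)
{
    int hits = 0;
    for (int sy = 0; sy < SS; sy++) {
        for (int sx = 0; sx < SS; sx++) {
            double x = px + (sx + 0.5) / SS;   /* subsample centre */
            double y = py + (sy + 0.5) / SS;
            double dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= r * r)
                hits++;                         /* subsample inside disc */
        }
    }
    return (double)hits / (SS * SS);   /* box-filtered down to one pixel */
}
```

Pixels fully inside give 1, fully outside give 0, and edge pixels get a fractional alpha, which is the coverage-based anti-aliasing everyone has been discussing.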

That’s computationally a massive overhead, and cannot generally be justified, but it’s useful to be reminded that this is where the mathematics leads. But I think we should probably spare the other members of this forum further diversion down that road!

that’s correct, héhé :wink:

Only in the special case of 1-dimensional antialiasing that you described. I have referred you to the Stack Overflow question about the proportion of a rectangle intersected by a circle, which is the 2-dimensional calculation you need to perform to discover (accurately) the alpha. It cannot be expressed as a function of the distance of the centre of the pixel from the centre of the circle.

I made a small experiment on this (1D case). I moved a pixel along a horizontal line, positioning the pixel at subpixel precision by anti-aliasing, and compared with the same pixel moving only at integer coordinates (of course, everything enlarged, otherwise it’s difficult to see). I tested several formulas for the alpha. I was surprised that it’s easy to see the difference! My impression is:

  • with alpha = proportion of pixel, the perceived intensity of the pixel kind of flickers: when the pixel is evenly spread over 2 pixels, the intensity seems noticeably too low.

  • with alpha = square root of the proportion, the perceived intensity is much more stable. The downside is that the moving pixel appears a little bit “fatter” than the integer pixel.
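The flicker with the linear rule is easy to quantify: if the alpha values are taken as intensities, the total displayed energy dips as the pixel straddles two positions. A tiny sketch of my own (not the actual experiment code):

```c
#include <math.h>

/* Total "energy" (sum of squared intensities) of the two physical pixels
 * as the ideal pixel slides from position 0 to 1. */

/* Linear rule: i1 = 1-x, i2 = x. Energy dips to 0.5 at x = 0.5. */
static double energy_linear(double x)
{
    double i1 = 1.0 - x, i2 = x;
    return i1 * i1 + i2 * i2;
}

/* Sqrt rule: i1 = sqrt(1-x), i2 = sqrt(x). Energy is constant 1. */
static double energy_sqrt(double x)
{
    return (1.0 - x) + x;   /* = sqrt(1-x)^2 + sqrt(x)^2 */
}
```

The 50% energy dip of the linear rule at the midpoint matches the observation that the evenly-spread pixel looks too dim.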

As you said, if I wanted to understand this, the gamma correction would have to be taken into account; I didn’t check the precise formula. A heuristic guess for the best formula would be an intermediate power law like (proportion of pixel)^(3/4)…

I tried to search online to see whether more serious tests have been made, but was not able to find any.

Note: it might very well be that the ‘best’ formula for moving pixels is not the same as the ‘best’ formula for still images, due to persistence of vision.

In my experience, supersampling and then using a convolution filter results in “blur”. It’s very good if the width of the convolution kernel is large, giving really nice blurring. I’m not sure it’s the best technique for ‘crisp’ anti-aliasing, where typically you want a transition region of at most 1 or 2 pixels. For instance, in font anti-aliasing a standard linear filter is fine for very large characters, but for tiny characters, where each pixel counts, the ‘hinting’ is often done ‘by hand’.

Maybe what it means is that if you want to smooth out a filled object (circle), a linear filter is good, but for a thin object (a non-filled circle of 1-pixel width) it will not give the best-looking result. Does this make sense?

Reducing ‘aliasing’ whilst at the same time preserving ‘sharpness’ involves a compromise, and is highly subjective. No two people will agree on what is the optimum balance, and it depends on the shape and size of the object being drawn. I therefore prefer to make a judgement based on the mathematics (for example attenuating aliases by a certain number of dB), because at least that’s objective!

Although it is more relevant to continuous-tone images rather than ‘graphics’ you might like to look at my paper A Frequency Domain approach to Scaling which discusses the issue of aliasing versus sharpness. You may also be interested in the Filter Synthesis program I developed to create optimised low-pass FIR filters (I spent a considerable proportion of my career specialising in these subjects!).

But I still think this forum is not the right place for this discussion. Judging my SDL2_gfx extensions on the basis of the quality of the antialiasing is completely missing the point. I added what I considered to be ‘missing’ functions, in keeping with the quality standards of the original SDL2_gfx, which was never intended to be very precise. I hope that the usefulness of the library has been improved as a result.

I never judged your extension on the basis of anti-aliasing quality! On the contrary you made me interested in this subject. My initial concern was just about speed, but this is very minor. But I enjoyed the discussion because I discovered many interesting things about anti-aliasing. I agree it can be boring for others, but I still want to thank you for this. Promise, this is my last post on this topic :wink:


These look very interesting, thanks! I have had some of these missing functions on my list for a while. I’m going to test these out in our codebase and see how they go.

Are you planning to submit these patches upstream? I have made a patch to the SDL2-gfx 1.0.4 package based on your code, with a few changes (e.g. include changes, paths, and not including the non-gfx-related code).

I see the discussion on sqrt(), cos() and sin(), but there also seem to be a lot of calls to malloc(). A reusable buffer might be a good idea there: the underlying SDL_Render* functions are documented as not being multithreaded, so a static could be an option.
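Something like this hypothetical grow-only scratch buffer is what I had in mind (the names are made up, and it is only safe because of the single-threaded guarantee mentioned above):

```c
#include <stdlib.h>

/* Hypothetical grow-only scratch buffer to replace repeated malloc/free.
 * NOT thread-safe: relies on SDL_Render* being single-threaded. */
static void  *gfx_scratch = NULL;
static size_t gfx_scratch_cap = 0;

static void *gfx_scratch_get(size_t need)
{
    if (need > gfx_scratch_cap) {
        void *p = realloc(gfx_scratch, need);  /* grow, keep contents */
        if (p == NULL)
            return NULL;                       /* caller handles failure */
        gfx_scratch = p;
        gfx_scratch_cap = need;
    }
    return gfx_scratch;   /* reused across calls, never shrunk */
}
```

The buffer is never freed until process exit, which is usually acceptable for a renderer-sized scratch area.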

I did see one artefact in aaArcRGB, see attached.


What were the exact parameters that gave rise to this artefact? What platform are you running it on? As you may have seen, I’ve included a workaround in the code for SDL_RenderDrawLine() not reliably including the endpoints on Linux, despite the documentation saying it does (I call SDL_RenderDrawPoints() instead) and it’s possible that this needs to be enabled on some other platforms as well.
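For reference, the idea behind that workaround can be sketched as follows: generate the line’s pixels explicitly (e.g. with Bresenham) and hand them to SDL_RenderDrawPoints(), so both endpoints are always included regardless of backend. Only the point generation is shown here; this is an illustration, not the actual library code:

```c
#include <stdlib.h>

typedef struct { int x, y; } Point;   /* stand-in for SDL_Point */

/* Write the pixels of the line (x1,y1)-(x2,y2) into out[], including
 * BOTH endpoints. Returns the number of points written. */
static int bresenham(int x1, int y1, int x2, int y2, Point *out)
{
    int dx = abs(x2 - x1), sx = x1 < x2 ? 1 : -1;
    int dy = -abs(y2 - y1), sy = y1 < y2 ? 1 : -1;
    int err = dx + dy, n = 0;
    for (;;) {
        out[n].x = x1; out[n].y = y1; n++;     /* plot current pixel */
        if (x1 == x2 && y1 == y2) break;       /* endpoint reached */
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x1 += sx; } /* step in x */
        if (e2 <= dx) { err += dx; y1 += sy; } /* step in y */
    }
    return n;
}
```

The resulting array would then be passed in one call to SDL_RenderDrawPoints(), sidestepping any backend disagreement about line endpoints.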

Here’s what I get with a similar arc, which shows no artefacts:

aaArcColor(renderer, 214.0, 90.0, 100.0, 100.0, 0.0, 180.0, 20.0, 0xFFFFFFFF);

[image: aaarc428x292]

I did see the workaround in the code for the SDL_RenderDrawLine() endpoints. If there is a bug, then it would seem nice to have it fixed. I’ll take a closer look at the codepath, but I target FreeBSD and macOS at the moment, so if the workaround is relevant, it’s not in my build.

This test was on macOS with these parameters:

aaArcRGBA(renderer, 1000.0, 1000.0, 200.0, 200.0, 1.0, 180.0, 20.0, 255, 255, 255, 255);

When the “start” parameter is 0.0, I do not see the same artefact. The artefact also changes with changing the coordinates.

Rebuilding with the #ifdef LINUX workaround enabled eliminates the artefact on macOS. I tried a Metal-capable system and a non-Metal-capable system, and also tried with SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");

Have you investigated the underlying problem in SDL_RenderDrawLine()? I had a quick look through the backend code, and it seems some backends add 0.5f to the point values and some don’t.

See SDL_RenderDrawLine endpoint inconsistency

Interesting. And I’m seeing it on macOS. I’ll add it to my list for a closer look.

I’ve never encountered that, and it suggests there is more than one cause. If OpenGL is so variable that one cannot be certain what pixels will be drawn, it is going to be difficult for SDL_RenderDrawLine() to be implemented consistently across platforms.

If you want another reference, check out SDL_gpu’s implementation: https://github.com/grimfang4/sdl-gpu/blob/e3d350b325a0e0d0b3007f69ede62313df46c6ef/src/renderer_shapes_GL_common.inl#L317

It uses an incremental circle algorithm that avoids sqrt, sin, and cos in the loop. License is liberal, so if it helps, make SDL_gfx better!
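For readers unfamiliar with the technique, the classic midpoint circle algorithm is in the same family: only integer additions inside the loop, no sqrt, sin or cos. A minimal sketch for one octant (illustrative, not the sdl-gpu code); the other seven octants follow by symmetry:

```c
/* Emit the pixels of one octant of a circle of radius r, centred at the
 * origin, using only integer arithmetic. Returns the number of points. */
static int midpoint_octant(int r, int *xs, int *ys)
{
    int x = r, y = 0;
    int err = 1 - r;          /* decision variable for the midpoint test */
    int n = 0;
    while (x >= y) {
        xs[n] = x; ys[n] = y; n++;   /* pixel on (or very near) the circle */
        y++;
        if (err < 0)
            err += 2 * y + 1;                 /* midpoint inside: keep x */
        else {
            x--;                               /* midpoint outside: step x in */
            err += 2 * (y - x) + 1;
        }
    }
    return n;
}
```

Every emitted point satisfies x² + y² ≈ r², so it traces the circle without any transcendental calls, which is exactly the speed win being discussed.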

Hello! Thank you for your great work @rtrussell

I was wondering if the updates to SDL_RenderDrawLine() and recent SDL additions like SDL_RenderGeometry() could lead to some performance optimizations in the SDL2_gfx library and your extensions?

Would be happy to try and contribute to this effort if it seems like there could be performance gains. I’m still new to graphics programming, but it seems that SDL_RenderGeometry() could prove useful for filled polygons and maybe polygons with border thickness (e.g. Drawing Polylines by Tessellation - CodeProject).
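For a convex polygon, the index generation for SDL_RenderGeometry() is just a triangle fan; arbitrary non-convex polygons need real triangulation (e.g. ear clipping). A hypothetical sketch of only the index part, with no SDL calls:

```c
/* Fill the index array for a triangle fan over a convex polygon with
 * nverts vertices (vertex 0 is the fan apex). Returns the index count,
 * which is 3 * (nverts - 2). The indices would then be passed to
 * SDL_RenderGeometry() along with the vertex array. */
static int fan_indices(int nverts, int *indices)
{
    int n = 0;
    for (int i = 1; i < nverts - 1; i++) {
        indices[n++] = 0;       /* apex */
        indices[n++] = i;       /* current edge start */
        indices[n++] = i + 1;   /* current edge end */
    }
    return n;
}
```

A hexagon, for instance, becomes four triangles (12 indices), all sharing vertex 0, which the GPU can then fill in a single draw call.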

Only in the case of the Emscripten/WebAssembly version do I currently take advantage of knowing that SDL_RenderDrawLine() can be trusted to be pixel-perfect:

#ifdef __EMSCRIPTEN__
		result = SDL_RenderDrawLine (renderer, x1, y1, x2, y2) ;
#else
...
#endif

I hadn’t considered that SDL_RenderGeometry() might be useful but it’s an interesting idea.


I hadn’t considered that SDL_RenderGeometry() might be useful but it’s an interesting idea.

If I can find some time this coming week, I want to take a shot at writing a filled ellipse, filled polygon, and polyline routine using RenderGeometry(). If anyone has recommendations for any open-source triangulation C libraries for arbitrary non-intersecting polygons I’d appreciate it :smiley:

I’m curious if it would outperform the current implementations in some cases.