I’ve been experimenting with SDL_RenderDrawLineF() (new in SDL 2.0.10) and I’m not getting the results I expected. For example, if I do this (using the OpenGL backend):
SDL_RenderDrawLineF(renderer, 3.5, 3.0, 9.5, 3.0);
I had expected the first and last pixels to have a reduced amplitude, because of their sub-pixel x coordinates, but that’s not what I see. Every pixel is either fully ‘on’ or fully ‘off’, just as it would have been with the integer version of the function. Changing the blend mode with SDL_SetRenderDrawBlendMode() didn’t seem to make any difference.
What am I missing here?
Try this before creating the renderer:
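(The hint itself isn’t quoted in this post; judging from the follow-up, which found it affected textures but not lines, it was presumably the render scale quality hint, something like:)

```c
/* Presumed hint (not shown in the original post): set before
 * SDL_CreateRenderer().  "0" = nearest, "1" = linear, "2" = best. */
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "1");
```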
Graphics driver quality settings can also affect the anti-aliasing of lines.
I just did a test and found the above hint made no difference to lines, only textures. The only way to make lines antialias was to mess with the NVIDIA control panel settings.
Oh. Does that imply that, of the new float functions, only SDL_RenderCopyExF() is likely to offer any benefit over the integer versions, without fiddling with graphics card settings?
The integer functions are the same as the F functions; the integer arguments just get cast to floats, so technically the F functions are slightly faster.
I think the original integer functions even call the new F functions.
Yes, via SDL_RenderDrawLinesF(). But even if the F functions are slightly faster, there’s little point in me using them if there’s no antialiasing. I think this should be mentioned in the docs (if those functions are ever documented!).
They’re both subject to the user’s antialiasing settings. There’s no disadvantage to using the F functions. Diagonal lines with points that don’t fall between pixels will still be antialiased the same way with either the F functions or the integer functions.
There can be. In my case I’m calling the functions from BBC BASIC, and passing 32-bit float parameters is a significant overhead in that language because it’s not a native data type (compared with a 64-bit double, which is). So if I’m getting no benefit, it’s easier and faster to pass integers.
I suppose it depends on why you want the antialiasing. If it’s to reduce ‘jaggies’ then antialiasing diagonal lines with integer endpoints has some value, but if it’s to achieve sub-pixel positioning it doesn’t. Personally I would prefer antialiasing and float coordinates always to go together and not to have one without the other.
Most (all?) OpenGL implementations (as well as DX, Vulkan, Metal, etc.) don’t anti-alias lines unless you turn on MSAA or something.
The value in float coordinates is not having to worry about truncation or whatever resulting in gaps, and most of the time it’s easier to just do all your coordinates as floats, especially when transformations and matrices get involved.
Float coordinates and anti-aliasing are completely separate things, and I’ve never seen the assumption that floats == AA before.
With respect, they’re not “completely separate things”. If by “anti-aliasing” one means the introduction of sample values other than zero or one on edges, which is probably how most people think of it in the computer graphics context, then one cannot achieve sub-pixel positioning of graphics elements without anti-aliasing, and one cannot calculate the sample values required to achieve anti-aliasing without knowing the position of the graphics elements to sub-pixel resolution. To that extent they are inextricably connected.
Of course this is not an accurate use of the term ‘aliasing’, which really only has a precise meaning in the frequency domain, where it refers to what happens when a signal is sampled at an insufficiently high frequency, i.e. there are spectral components in the original signal in excess of the Nyquist Limit of half the sampling frequency. But in my experience programmers’ eyes tend to glaze over when you try to explain anti-aliasing in terms of sampling theory!
I believe turning on multi-sampling will achieve the desired effect of sub-pixel positioning. It is just the standard way of doing both that and anti-aliasing on GPU in general (since these things are obviously very closely intertwined). In contrast the old integer API would not have allowed sub-pixel positioning even with multi/super-sampling on.
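With the OpenGL backend, multi-sampling can be requested through GL attributes before the window and renderer are created (a sketch; 4 samples is an arbitrary choice):

```c
/* Request a 4x MSAA framebuffer.  Must be done before the window
 * (and hence the GL context) is created. */
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);
```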
You could do something else clever for 2D specifically, like what nanovg does for example, but there are trade-offs and no clear winners, so it’s hard to require such a solution from a low level wrapper library. The “clever” approach can be more precise and have low performance impact for simple geometries, but the impact will increase with the complexity, while multi-sampling has more or less fixed cost regardless of complexity, so if the GPU can handle it for a specific resolution it can almost seem free/zero cost.
Anti-aliasing is a visual effect used to create the illusion that the display has more resolution than it actually does during rasterization.
Sub-pixel precision is separate, in that it is used to determine where geometry falls within a given pixel. This is then used to determine whether or not it covers enough of the pixel that the pixel should be drawn, regardless of anti-aliasing. Without sub-pixel precision you get stuff like one pixel holes in 3D geometry where polygons are touching, because the GPU doesn’t have enough internal precision during rasterization. Different anti-aliasing techniques can use sub-pixel accuracy to achieve the desired effect, but it isn’t strictly required for AA.
Neither of these has to do with whether or not the coordinates you pass to SDL are ints or floats, or whether or not the GPU will draw anti-aliased lines without AA being turned on.
If you want anti-aliasing, turn it on.
Sorry to nit-pick, but I would disagree that anti-aliasing in general is just making it look higher resolution. In software rendering, or with some of those clever shader techniques in GPU rendering, the approach is quite different: the intensity of each pixel/fragment is adjusted based on the portion of the geometry covering it (or some approximation of that). This approach often produces better results than super/multi-sampling and is in essence another application of sub-pixel positioning/precision.
In the realm of GPUs, the clever techniques are not that effective or popular, so higher-resolution tricks are what is most often used, because they lend themselves well to the GPU architecture.
That’s why I said AA can use sub-pixel precision to get the desired effect. But ultimately the goal of AA is to hide the fact that the display only has so much resolution. On a display with pixels too small to be individually seen (and matching rendering resolution), AA becomes unnecessary.
My ultimate point, though, was that using float coordinates or having sub-pixel precision doesn’t imply AA.
Yes, but eyes are a tricky thing, and even if you can’t see the individual pixels you might be able to see some aliasing effects. I haven’t tested it myself, but a quick search reveals that many people have, and arrived at the conclusion that AA is still necessary. Purely theoretically, in many edge-case scenarios the clever AA techniques are going for what would be the equivalent of infinite resolution, not just higher resolution, and in their implementation there is no notion of higher resolution, but sub-pixel precision is there. The concepts are closely related, so I wouldn’t blame someone who doesn’t want to be concerned with the details of a specific implementation for expecting one to follow from the other. The higher-resolution mindset is very much a GPU mindset. Of course that’s not surprising at all in the context of a low-level library like SDL, so sorry again for nitpicking.
There’s no “illusion”. Sampling theory shows that a sampled signal, such as a 2D digital image, does indeed have infinite spatial resolution (ignoring amplitude quantising).
Sorry, it’s not. The two things are just different ways of looking at the same phenomenon, i.e. the representation of a continuous signal by means of samples.
For my sins, I worked for 33 years in a Research & Development department in which I specialised in this area, both in respect of digital video (which involves sampling a continuous signal) and computer graphics (typically the synthesis of sample values which correspond to what that sampled signal would ideally be).
I don’t know if it will be at all helpful, but several years ago I wrote this paper which attempts to explain some of the principles behind sampling theory, particularly in the context of image scaling, in non-mathematical terms.
Your monitor, however, does indeed have finite spatial resolution, and that is what anti-aliasing in GPU rendering, in the context of getting lines without jaggies on your computer monitor, is intended to hide.
We’re talking about hardware-accelerated rasterization, which involves some concepts from signal processing, definitely. But when talking about sub-pixel precision in rasterization, such as drawing a line on the screen in your case, it usually just means deciding at a higher resolution whether or not a pixel should be filled. Whether or not some sort of blending operation or amplitude adjustment should be performed if the GPU decides to fill the pixel, and whether to involve pixels only partly covered, is a separate matter that you have to explicitly request the GPU perform (or do yourself).
Anyway, this is a dumb argument. SDL passes the coordinates you give it along to the GPU and lets the GPU do the line drawing (unless you have output scaling turned on).
In a signal-processing sense, the monitor is the DAC (digital-to-analogue converter) and if it’s doing its job properly it converts each sample value into the brightness of the associated pixel. It’s still sampled, but now it’s an optical signal rather than electrical. Just as Sampling Theory shows that the sampled electrical signal has infinite spatial resolution, so does the sampled optical signal!
The next stage in the process of reconstituting the continuous signal is the ‘reconstruction filter’: a low-pass filter which passes frequencies below the Nyquist Limit but not those above. It’s perfectly true that most monitors do not contain an optical reconstruction filter (in contrast with high-end image sensors which do include an equivalent filter) so that function is performed - more or less poorly - by the eye.
So the reconstruction may not be perfect, and indeed some aliasing may remain from that cause, but nevertheless the near-infinite spatial resolution is preserved. I know this is counter-intuitive, but the mathematics doesn’t lie. It’s precisely because of the insight it can give into some of these fundamentals that an understanding of sampling theory is valuable.
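For the record, the mathematical statement behind this is the Whittaker–Shannon interpolation formula: a signal sampled at interval $T$, with no spectral content above the Nyquist limit $1/2T$, is reconstructed exactly at every point $x$, not just at the sample sites:

```latex
f(x) = \sum_{n=-\infty}^{\infty} f(nT)\,
       \operatorname{sinc}\!\left(\frac{x - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}
```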
Let’s all be friends. It sounds like you guys are getting lost in semantics.
I’d like to encourage the use of the float functions over the integer ones (which have no advantage and will probably be removed in the next big release of SDL). SDL will convert to floats at the end anyway.
It’s the end-user’s choice to make the lines anti-alias. Including diagonal lines drawn with the integer functions - you cannot prevent this if the user wants to configure their graphics driver settings to draw non-jagged lines. The float functions let you place the start and end points between pixels. The effect of this also depends on how the user chooses to set their graphics driver settings.
I’m not sure if @rtrussell specifically wants to draw pixel-art lines? Or just to avoid the overhead of working with floats? The integer functions convert to floats anyway, so you can’t avoid it. It would help to know what end result you’re looking for.
I wrote my own extensions to the SDL2_gfx library to draw antialiased figures (ellipses, polygons, etc.) which call only the standard integer functions. So I have no need for the new float functions (until all the Linux repositories have updated to SDL 2.0.10 or later I can’t even assume that those functions will be available to my users).
However my SDL2_gfx extensions rely crucially on being able to draw horizontal and vertical lines with a precise position and length.
SDL_RenderDrawLine() should be suitable because the docs explicitly state that the line includes both end points, but in practice that has never been reliable. On Linux there is code in SDL which explicitly causes one end point to be omitted, and a bug in SDL 2.0.10 means that one end point is omitted (actually plotted in the wrong place) on all platforms!
So basically that means I cannot trust SDL_RenderDrawLine() to plot precisely the pixels I need it to, and currently I am calling SDL_RenderDrawPoints() simply to draw a horizontal or vertical line! This is very wasteful and potentially slow, so my interest in SDL_RenderDrawLineF() was in part to see if it could be relied upon to plot exactly the same pixels on every platform. To the extent that its behavior depends on graphics card settings, it would seem not.
I’ve always felt the line functions were an afterthought and just there for debugging and not official use, because they’ve never really worked well. They never responded well to the SDL_RenderSetLogicalSize family of functions either, causing points and straight lines to be thicker / thinner, while diagonal lines stay the same. They’ve always looked different across renderers, too. No one’s given them the attention they need to fall in line (haha). They’ve been improved dramatically in the latest source though (we discussed this before). You should raise any remaining bugs with bugzilla.libsdl.org so the contributors can get these line routines working flawlessly.
In the meantime, embracing the F functions will do no harm. It seems the missing pixels aren’t related to anti-aliasing or end points falling between pixels, so users’ video driver settings aren’t affecting consistency (just jagged vs smooth lines).