Drawing lines with float coordinates

Most (all?) OpenGL implementations (as well as DX, Vulkan, Metal, etc.) don’t anti-alias lines unless you turn on MSAA or something.

The value of float coordinates is not having to worry about truncation (or similar rounding) producing gaps, and most of the time it’s easier to just do all your coordinates as floats, especially once transformations and matrices get involved.

Float coordinates and anti-aliasing are completely separate things, and I’ve never seen the assumption that floats == AA before.

With respect, they’re not “completely separate things”. If by “anti-aliasing” one means the introduction of sample values other than zero or one on edges, which is probably how most people think of it in the computer graphics context, then one cannot achieve sub-pixel positioning of graphics elements without anti-aliasing, and one cannot calculate the sample values required to achieve anti-aliasing without knowing the position of the graphics elements to sub-pixel resolution. To that extent they are inextricably connected.

Of course this is not an accurate use of the term ‘aliasing’, which really only has a precise meaning in the frequency domain, where it refers to what happens when a signal is sampled at an insufficiently high frequency, i.e. there are spectral components in the original signal in excess of the Nyquist Limit of half the sampling frequency. But in my experience programmers’ eyes tend to glaze over when you try to explain anti-aliasing in terms of sampling theory!

I believe turning on multi-sampling will achieve the desired effect of sub-pixel positioning. It is just the standard way of doing both that and anti-aliasing on GPU in general (since these things are obviously very closely intertwined). In contrast the old integer API would not have allowed sub-pixel positioning even with multi/super-sampling on.

You could do something else clever for 2D specifically, like what nanovg does for example, but there are trade-offs and no clear winners, so it’s hard to require such a solution from a low-level wrapper library. The “clever” approach can be more precise and have a low performance impact for simple geometry, but the cost grows with complexity, while multi-sampling has a more or less fixed cost regardless of complexity; if the GPU can handle it at a given resolution it can seem almost free.

Anti-aliasing is a visual effect applied during rasterization to create the illusion that the display has more resolution than it actually does.

Sub-pixel precision is separate, in that it is used to determine where geometry falls within a given pixel. This is then used to determine whether or not it covers enough of the pixel that the pixel should be drawn, regardless of anti-aliasing. Without sub-pixel precision you get stuff like one pixel holes in 3D geometry where polygons are touching, because the GPU doesn’t have enough internal precision during rasterization. Different anti-aliasing techniques can use sub-pixel accuracy to achieve the desired effect, but it isn’t strictly required for AA.

Neither of these has to do with whether or not the coordinates you pass to SDL are ints or floats, or whether or not the GPU will draw anti-aliased lines without AA being turned on.

If you want anti-aliasing, turn it on.

Sorry to nit-pick, but I would disagree that anti-aliasing in general is just making it look higher resolution. In software rendering, or in some of those clever shader techniques in GPU rendering, the approach is quite different: the intensity of each pixel/fragment is adjusted based on the portion of the geometry covering it (or some approximation of that). This approach often produces better results than super/multi-sampling and is in essence another application of sub-pixel positioning/precision.

In the realm of GPUs, the clever techniques are not that effective or popular, so higher-resolution tricks are what is most often used, because they lend themselves well to the GPU architecture.

That’s why I said AA can use sub-pixel precision to get the desired effect. But ultimately the goal of AA is to hide the fact that the display only has so much resolution. On a display with pixels too small to be individually seen (and matching rendering resolution), AA becomes unnecessary.

My ultimate point, though, was that using float coordinates or having sub-pixel precision doesn’t imply AA.

Yes, but eyes are a tricky thing, and even if you can’t see the individual pixels you might still be able to see some aliasing effects. I haven’t tested it myself, but a quick search reveals that many people have, and they arrived at the conclusion that AA is still necessary. Purely theoretically, in many edge cases the clever AA techniques are going for what would be the equivalent of infinite resolution, not just higher resolution, and in their implementation there is no notion of higher resolution, but sub-pixel precision is there. The concepts are closely related, so I wouldn’t blame someone who doesn’t want to be concerned with the details of a specific implementation for expecting one to follow from the other. The higher-resolution mindset is very much a GPU mindset. Of course that’s not surprising at all in the context of a low-level library like SDL, so sorry again for nitpicking :smiley:

There’s no “illusion”. Sampling theory shows that a sampled signal, such as a 2D digital image, does indeed have infinite spatial resolution (ignoring amplitude quantising).

Sorry, it’s not. The two things are just different ways of looking at the same phenomenon, i.e. the representation of a continuous signal by means of samples.

For my sins, I worked for 33 years in a Research & Development department in which I specialised in this area, both in respect of digital video (which involves sampling a continuous signal) and computer graphics (typically the synthesis of sample values which correspond to what that sampled signal would ideally be).

I don’t know if it will be at all helpful, but several years ago I wrote this paper which attempts to explain some of the principles behind sampling theory, particularly in the context of image scaling, in non-mathematical terms.

Your monitor, however, does indeed have finite spatial resolution, and that is what anti-aliasing in GPU rendering (in the context of getting lines without jaggies on your computer monitor) is intended to hide.

We’re talking about hardware-accelerated rasterization, which involves some concepts from signal processing, definitely. But when talking about sub-pixel precision in rasterization, such as drawing a line on the screen in your case, it usually just means deciding at a higher resolution whether or not a pixel should be filled. Whether or not some sort of blending operation or amplitude adjustment should be performed if the GPU decides to fill the pixel, and whether to involve pixels only partly covered, is a separate matter that you have to explicitly request the GPU perform (or do yourself).

Anyway, this is a dumb argument. SDL passes the coordinates you give it along to the GPU and lets the GPU do the line drawing (unless you have output scaling turned on).

In a signal-processing sense, the monitor is the DAC (digital-to-analogue converter) and if it’s doing its job properly it converts each sample value into the brightness of the associated pixel. It’s still sampled, but now it’s an optical signal rather than electrical. Just as Sampling Theory shows that the sampled electrical signal has infinite spatial resolution, so does the sampled optical signal!

The next stage in the process of reconstituting the continuous signal is the ‘reconstruction filter’: a low-pass filter which passes frequencies below the Nyquist Limit but not those above. It’s perfectly true that most monitors do not contain an optical reconstruction filter (in contrast with high-end image sensors which do include an equivalent filter) so that function is performed - more or less poorly - by the eye.

So the reconstruction may not be perfect, and indeed some aliasing may remain from that cause, but nevertheless the near-infinite spatial resolution is preserved. I know this is counter-intuitive, but the mathematics doesn’t lie. It’s precisely because of the insight it can give into some of these fundamentals that an understanding of sampling theory is valuable.

Let’s all be friends :slight_smile:. It sounds like you guys are getting lost in semantics.

I’d like to encourage the use of the float functions over the integer ones (which have no advantage and will probably be removed in the next big release of SDL). SDL will convert to floats at the end anyway.

It’s the end-user’s choice to make the lines anti-aliased. That includes diagonal lines drawn with the integer functions: you cannot prevent this if the user configures their graphics driver settings to draw non-jagged lines. The float functions let you place the start and end points between pixels; the effect of this also depends on how the user chooses to set their graphics driver settings.

I’m not sure if @rtrussell specifically wants to draw pixel-art lines, or just to avoid the overhead of working with floats? The integer functions convert to floats anyway, so you can’t avoid it. It’d help to know what end result you’re looking for.

I wrote my own extensions to the SDL2_gfx library to draw antialiased figures (ellipses, polygons, etc.) which call only the standard integer functions. So I have no need for the new float functions (until all the Linux repositories have updated to SDL 2.0.10 or later I can’t even assume that those functions will be available to my users).

However my SDL2_gfx extensions rely crucially on being able to draw horizontal and vertical lines with a precise position and length. SDL_RenderDrawLine() should be suitable because the docs explicitly state that the line includes both end points, but in practice that has never been reliable. On Linux there is code in SDL which explicitly causes one end point to be omitted, and a bug in SDL 2.0.10 means that one end point is omitted (actually plotted in the wrong place) on all platforms!

So basically that means I cannot trust SDL_RenderDrawLine() to plot precisely the pixels I need it to, and currently I am calling SDL_RenderDrawPoints() simply to draw a horizontal or vertical line! This is very wasteful and potentially slow, so my interest in SDL_RenderDrawLineF() was in part to see if it could be relied upon to plot exactly the same pixels on every platform. To the extent that its behavior depends on graphics card settings it would seem not.

I’ve always felt the line functions were an afterthought and just there for debugging rather than official use, because they’ve never really worked well. They never responded well to the SDL_RenderSetLogicalSize family of functions either, causing points and straight lines to become thicker / thinner while diagonal lines stay the same. They’ve always looked different across renderers, too. No one’s given them the attention they need to fall in line (haha). They’ve been improved dramatically in the latest source though (we discussed this before). You should raise any remaining bugs at bugzilla.libsdl.org so the contributors can get these line routines working flawlessly.

In the meantime, embracing the F functions will do no harm. It seems the missing pixels aren’t related to anti-aliasing or to end points falling between pixels, so users’ video driver settings aren’t affecting consistency (just jagged vs smooth lines).

Yes, but not yet in a ‘stable’ release (which will presumably be SDL 2.0.12). The current ‘stable’ SDL 2.0.10 has actually made things worse because of the ‘misplotted end-point’ regression, making it effectively impossible for me to use SDL_RenderDrawLine() (or the float version) for any of my antialiased graphics.

Even when 2.0.12 is released it’s going to be quite a while before the buggy versions have been flushed from the various Linux repositories.

Try persuading the author of SDL2_gfx of that!

I heard a rumour that the SDL devs were looking at using a surface to draw the lines instead of what they’re doing at the moment, which would change everything.

But yes, your problem won’t go away any time soon, but you can start the ball rolling on bugzilla now if it’s important to you.

This is the second time I’ve seen people mentioning problems with SDL_RenderSetLogicalSize recently. This function is broken and should not be used.

I created SDL_RenderSetLogicalSize several years ago and submitted it as a patch. Ryan added it, but he rather badly misunderstood what it’s supposed to be for and reimplemented the whole thing in a completely wrong way, then refused to fix it when I called attention to the problems with it, citing other “valuable” use cases that had nothing to do with what the concept of Logical Size is actually supposed to do. And now problems like this pop up because the function is fundamentally broken.

Yes, and rotated textures get scaled by logical size functions BEFORE being rotated, so they get all distorted and skewed.

Ryan is pretty savvy; give him credit - I’m sure there is a good reason behind his decisions. Changing the logical size functions now will break compatibility of a lot of games.

This is something that needs to be reworked for the next major version for sure.

Given the extreme reluctance even to increment the ‘minor’ version number (every new version is simply a ‘patch’ according to the usual interpretation) I can’t see there ever being another ‘major’ version! But seriously, I do wish SDL used the version numbering properly so one can tell when new functionality is added.

There was a highly specific reason: he wanted something that could deal with full-screen letterboxing for ports of old games that were written assuming a 4:3 aspect ratio. The problem is, completely rewriting something that was never intended for full-screen use in the first place (again, I can say this with authority, because I’m the one who created it) to shoehorn it into one very specific use case ended up breaking it for general use. Also:

Yes, and rotated textures get scaled by logical size functions BEFORE being rotated, so they get all distorted and skewed.

Yeah. There’s the “pretty savvy good reason” at work. :roll_eyes: The original implementation was much, much simpler, just changing the size of the viewport and letting the hardware take care of getting the scaling right. Introducing extra complexity ends up breaking stuff. Who knew?

I use a target texture (rather than rendering to the default target) so it’s trivial to accommodate things like pillarboxing or letterboxing at the point at which the target texture is eventually blitted to the default target, prior to the SDL_RenderPresent().

As a result my app can handle any ‘logical’ screen size (and it needs to because it is running legacy BASIC programs) simply by adjusting the scaling at that final ‘SDL_RenderCopy()’ step, without any need for SDL_RenderSetLogicalSize() or similar functionality.