What about single-buffering?


Hi everybody, it's my first post on this forum!
I had been using SDL for a while, but because of its intimidating, super long GNU GPL license, I decided to give SDL2, which is under the zlib license, a go. So I downloaded it. However, I quickly realised that the rendering system was completely different: it used all this SDL_Renderer stuff instead of the old SDL_Surface.
But the SDL_Surface system still existed. It had changed a little (e.g. SDL_Flip() was replaced by SDL_UpdateWindowSurface()), but that did not cause me any trouble.
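
For reference, here is a minimal sketch of that SDL2 surface path, so you can see what I mean (my own demo code, error checking omitted):

```c
#include <SDL2/SDL.h>

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("surface demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);

    /* In SDL2 you fetch the window's surface instead of calling
       SDL_SetVideoMode() like in SDL 1.2. */
    SDL_Surface *screen = SDL_GetWindowSurface(win);

    /* Draw something, then push it to the window; this call is what
       replaced the old SDL_Flip(screen). */
    SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 255, 0, 0));
    SDL_UpdateWindowSurface(win);

    SDL_Delay(2000);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```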

To test SDL2, I wanted to create a little emulator. I was glad I had chosen SDL_Surfaces, as it turned out the SDL_Renderer system didn't have any good way of doing pixel manipulation. I could have created a surface, converted it to an SDL_Texture, and then rendered that SDL_Texture, but I didn't see the purpose of these extra steps. Plus, as far as I knew, the system I was writing an emulator for (Chip8) didn't have anything like double buffering, and copying the surface would create some kind of double buffering. So I obviously didn't want that.

Once it was done, I ran my little emulator. It seemed to work perfectly fine. But then came a twist: it was double-buffered, exactly what I did not want.

I quickly realised that SDL_Surfaces were double-buffered by default, or something like that. Now, that was a nightmare! I searched everywhere, but I could not find out how to do single buffering in SDL2, whether using SDL_Surface or SDL_Renderer.

So, is it possible to do it? How can I do single-buffering, using one screen buffer instead of two?


Thanks in advance for your answers.

Is there a windowing system out there that does single buffering? Probably not any modern ones.

Why would you want users to see what you’re drawing as it’s being drawn (with potentially a lot of flicker) instead of presenting the whole frame once it’s complete?

It's actually quite a cool visual effect, though doing a proper animation system is more likely the correct way to go.

Well… because I'm trying to create a Chip8 interpreter which actually looks like the real thing? Chip8 did not use double-buffering, as far as I know!
And anyway, even though it may not always be wanted, I can imagine a lot of scenarios where it would be. In my case, it's for emulation.

So, is there a way in SDL2?
Otherwise, I might switch to Allegro.

I’m wondering if there is a misunderstanding here. The only effect of double-buffering that I would expect you to notice on the video output is a delay of an extra frame period between the rendering action in your program and the actual display device updating. Unless you are building a VR application where rendering-to-display latency is critical I wouldn’t expect such a small delay to matter, certainly not in your emulation scenario.

Is it possible that what you describe as single-buffering is in fact the ability to ‘build up’ the contents of the display, element by element, over a period of time like old-school computer graphics? If so you can certainly do that in SDL2 using a ‘target texture’. The trick is to render into the target texture (not the default render target which gets cleared every frame) and only to clear it when you want to.
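
Something like this shows the idea (a quick untested sketch, not from any particular emulator):

```c
#include <SDL2/SDL.h>

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("target texture demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 320, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* A persistent 64x32 canvas (Chip8-sized). It is only cleared when
       we decide to clear it, so drawing accumulates across frames. */
    SDL_Texture *canvas = SDL_CreateTexture(ren, SDL_PIXELFORMAT_RGBA8888,
        SDL_TEXTUREACCESS_TARGET, 64, 32);
    SDL_SetRenderTarget(ren, canvas);
    SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
    SDL_RenderClear(ren);                    /* clear it once, up front */

    int i = 0, running = 1;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = 0;

        /* Draw one more pixel onto the canvas; earlier pixels remain. */
        SDL_SetRenderTarget(ren, canvas);
        SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
        SDL_RenderDrawPoint(ren, i % 64, (i / 64) % 32);
        i++;

        /* Show however much of the picture exists so far. */
        SDL_SetRenderTarget(ren, NULL);
        SDL_RenderCopy(ren, canvas, NULL, NULL);
        SDL_RenderPresent(ren);
        SDL_Delay(16);
    }

    SDL_DestroyTexture(canvas);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```

Each frame adds one pixel, and the window shows the image building up element by element, without anything being wiped between presents.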


I could have created a surface, converted it to an SDL_Texture, and then rendered that SDL_Texture, but I didn't see the purpose of these extra steps.

That is actually the proper way to do this. This is because SDL2 uses the GPU for presenting to the screen now. In order to use the GPU, your pixel data must be in video memory first, and copying it from an SDL_Surface into an SDL_Texture puts it into video memory. All emulators do this. There is no way around it, even if you used Allegro.

The SDL 2 migration guide has sections for getting fully software-rendered frames to the screen or blitting surfaces to the screen.
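
Boiled down, the guide's streaming-texture approach looks like this (a sketch, assuming you already have a `renderer` and a 640x480 ARGB pixel buffer `pixels`):

```c
/* Once, at startup: a streaming texture matching your pixel buffer. */
SDL_Texture *tex = SDL_CreateTexture(renderer,
    SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_STREAMING, 640, 480);

/* Every frame: push the CPU-side pixels into video memory, then draw. */
SDL_UpdateTexture(tex, NULL, pixels, 640 * sizeof(Uint32));
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, tex, NULL, NULL);
SDL_RenderPresent(renderer);
```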

Uh… What I mean is that, when a Chip8 program actually draws to the screen, the drawing should be visible while it happens. It should not be a flicker-free thing where you draw everything to a backbuffer and then swap buffers.
It should be drawn directly on the screen surface.

You say that as if it were necessary to put my data into video memory. But why add this extra conversion step? My program can render without doing that. What is the point of rendering with the GPU? Plus, how would that help me do single-buffering? If I copy my surface to a texture, I'm going to have to blit it, which takes me a step further from drawing directly into screen memory. There must be some way to do that.

What I mean is not that Allegro only uses the CPU. I mean Allegro seems to have some way to do single-buffering (through al_set_new_display_option()). Wasn't that clear?
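
For the record, the call I was referring to looks something like this, if I'm reading the Allegro 5 docs right (untested):

```c
/* After al_init(): ask for a single-buffered display before creating it.
   ALLEGRO_SUGGEST means the hint may be ignored if unsupported. */
al_set_new_display_option(ALLEGRO_SINGLE_BUFFER, 1, ALLEGRO_SUGGEST);
ALLEGRO_DISPLAY *display = al_create_display(640, 480);
```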

Basically, your screen is plugged in through your GPU, so the GPU is sending the signal into the wire that makes the screen show a picture. A modern device simply doesn’t allow the kind of ultra primitive access you’re wanting.

Seriously? There is no way to do it?
So how do programs like DOSBox, for example, achieve it?

In SDL terms:

  • Render to a Surface for one real world frame time (usually 1/60th of a second).
  • Upload the Surface to a Texture and immediately display that, however much progress you have.
  • Continue the actual rendering process with your Surface until the picture is complete.

You have to slow it way down to make it look right, but that’s the basic idea.
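
A rough sketch of that loop, with made-up helper names, assuming a CPU-side SDL_Surface called `work` plus an SDL2 `renderer`:

```c
Uint32 frame_start = SDL_GetTicks();

/* 1. Run the emulator, drawing into the CPU-side surface, until one
      real-world frame (~16 ms at 60Hz) has elapsed. */
while (SDL_GetTicks() - frame_start < 16)
    emulate_one_step(chip8, work);      /* hypothetical emulator step */

/* 2. Upload however much of the picture exists so far and show it. */
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, work);
SDL_RenderCopy(renderer, tex, NULL, NULL);
SDL_RenderPresent(renderer);
SDL_DestroyTexture(tex);

/* 3. The next iteration keeps drawing into the same surface, so the
      image continues to build up on screen over successive frames. */
```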

Oh, that is disappointing…
So there isn't any better solution than that?
And, if there isn't, do you think it would be possible for this kind of feature to be added?

EDIT:
Just discovered the function SDL_UpdateWindowSurfaceRects(). It may go a little faster. And, out of pure curiosity: is it possible to do rendering using nothing but the CPU on a modern computer? And how does DOSBox (which uses SDL1) achieve single-buffering?
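
In case it helps anyone later, here is roughly how I'd call it (the rectangles are just example values):

```c
/* Push only the dirty regions of the window surface to the screen,
   instead of the whole thing like SDL_UpdateWindowSurface() does. */
SDL_Rect dirty[2] = {
    {  0,  0, 64, 32 },   /* example: one area that changed     */
    {  0, 64, 64, 32 }    /* example: another area that changed */
};
SDL_UpdateWindowSurfaceRects(window, dirty, 2);
```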

SDL1 and/or DOSBox can't (visibly) update the screen more than 60 times per second (on a 60Hz screen) either.
If you disable vsync you can render as many frames as you want (limited by your system's performance) and get all the ugly tearing you want.
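
In SDL2, vsync is something you opt into when creating the renderer, so leaving the flag out is enough (SDL_RenderSetVSync() also exists since 2.0.18 if you want to toggle it later):

```c
/* Vsync on: SDL_RenderPresent() waits for the display's refresh. */
SDL_Renderer *vsynced = SDL_CreateRenderer(window, -1,
    SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

/* Vsync off: just omit the flag. Frames are presented as fast as the
   system allows, with possible tearing. */
SDL_Renderer *unsynced = SDL_CreateRenderer(window, -1,
    SDL_RENDERER_ACCELERATED);
```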

You can make CPU rendering fast enough to be usable, but it takes some work, and there’s usually not much point. It’s interesting for educational purposes, but not what you want to be shipping to others.

Thank you, but how do I do that without having to call SDL_UpdateWindowSurface() or anything like that? I mean, not drawing on a backbuffer and then updating the contents of a frontbuffer, but drawing directly on the frontbuffer?
And, if that's not possible, how can I turn off vsync with SDL2? Will I have to migrate to SDL_Renderer to do it?

DOSBox does what every other emulator does, whether using OpenGL, SDL surfaces, or whatever: build up a single frame and then put it on the screen, either through blitting or page flipping. Don't conflate double buffering with vsync.

That’s just how modern displays work: once every 1/60th of a second (or however long), your GPU sends a frame to your monitor. Even if SDL had single buffered windows, AFAIK modern windowing systems don’t give you unbuffered writes to the window contents, because of the need to involve the window compositor.

Emulators have been working just fine with normal double-buffered displays for a long time.

edit: To write a Chip8 emulator that includes things like screen flicker and objects flashing, you'd have to concern yourself with how long the Chip8 took to put pixels on screen, and emulate that over however many frames. Even with single buffering, a modern computer would not have the same kind of flicker that a Chip8 had.

edit 2: YouTube is full of videos of people’s Chip8 emulators. Maybe you could see how they did theirs.


I just found a video by somebody named Alexander Osipchuk. A link to the source was added in the video’s description.
Just posting it here in case somebody is interested:
Source code of that emulator (C++ & SDL2)

Still, by pure curiosity:
Is it possible to render an image without drawing offscreen? That is, draw directly to the target surface?
I just ended up on this raycasting tutorial here with some online demos.

The second demo does not feature double buffering, so it runs kind of flickery.
The third demo, however, adds double buffering.

Which basically demonstrates that this kind of effect IS possible on a modern machine. So, is it possible to use only one screen buffer in SDL2?

Please, could anybody out there answer me? It's not as if I really need single buffering, but it would be nice to know whether SDL2 can do it or not. It might come in handy one day.

No, SDL2 doesn’t do single buffering.