Triple buffering

Is there any way to get triple buffering in SDL? If not, are there any plans
to implement it? Perhaps we could get a discussion going on how it can be
achieved?

/ Chrisse

Is there any way to get triple buffering in SDL? If not, are there any plans
to implement it? Perhaps we could get a discussion going on how it can be
achieved?

My goodness, my answer to everything these days seems to be…

Yes, it’s planned. Wait for SDL 1.3

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

How are you planning to implement this? I have been thinking about it, and I
mostly come up with problems in implementing it in Linux… If you have a
solution, perhaps I could help implement it.

/ chrisse

On Tue, Mar 27, 2001 at 07:08:48 -0800, Sam Lantinga wrote:

Is there any way to get triple buffering in SDL? If not, are there any plans
to implement it? Perhaps we could get a discussion going on how it can be
achieved?

My goodness, my answer to everything these days seems to be…

Yes, it’s planned. Wait for SDL 1.3

Yes, it’s planned. Wait for SDL 1.3

how are you planning to implement this?

There will be a general surface <-> video mode decoupling with several
different ways of describing remote/hardware surface memory. The model
will be general enough that you can use it to describe oversize surfaces
and break them down into multiple flipping pages.

I have an architecture in mind, and I want to get a rough implementation
down so that people can play with it and work on it at that point.
I’m not going to have any time to work on it for the next two months,
so please don’t pester me until then. :)

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

Just a quick one… I’m really new to any sort of graphical programming. What
would be the use of triple buffering over double buffering? I’m not slamming
anyone or anything, I just can’t think of a reason for it.
thanks
Jaren

I’ve looked into that a little as well, and it seems rather hard to do if
there isn’t support for hardware pageflipping. Same problem with Utah-GLX as
with all other Linux targets…

As soon as you have to do a back->front blit to “flip” (as Utah-GLX and most
other targets do), you must do it carefully in sync with the retrace, which
is a serious problem in itself! (*)

The next problem with triple buffering in particular, is that you must do the
sync + “flip” operation asynchronously in relation to the rendering thread.
(That’s the whole point with triple buffering…)

I considered doing a kludge solution in Utah-GLX, modulating the level of the
command buffer so that the flips could be squeezed in at the right times.
This would be simplified a lot by another anti-tearing hack of mine:
raster-synchronized partial screen refreshing. The idea is to “chase” the
raster, rather than having it breathing down your neck while you try to hit
the screen at the exact right moment. That way, you get at least half a frame
period of jitter tolerance, rather than just the (very, very short, by modern
OS timing standards) retrace period. (I have a working Linux-only prototype
of that, which I should clean up and release.)
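
For illustration, a minimal sketch of the partial-refresh idea (this is not
the prototype mentioned above; current_scanline() and blit_strip() are
assumed primitives, and wrap-around at the bottom of the frame is ignored):

    /* Copy the back buffer to the front buffer in horizontal strips,
     * each strip only after the beam has passed it, so the copy always
     * happens behind the raster and never tears. */

    extern int  current_scanline(void);                    /* hypothetical: beam position */
    extern void blit_strip(const void *src, int y, int h); /* hypothetical back->front copy */

    #define SCREEN_H 480
    #define STRIP_H   32

    void partial_refresh(const void *back)
    {
        int y;
        for (y = 0; y < SCREEN_H; y += STRIP_H) {
            /* Busy-wait (or sleep briefly) until the beam is below this strip. */
            while (current_scanline() < y + STRIP_H)
                ;
            blit_strip(back, y, STRIP_H);   /* copy lines [y, y+STRIP_H) */
        }
    }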

(*) I think I have a rather generic solution, but it requires more work.
The best place for it seems to be a stand-alone kernel driver, to avoid
tying it to a single driver architecture - but for some cards that
wouldn’t work, as you can’t just poke at the retrace registers at any
time if some other driver is using the card. Meanwhile, for some strange
reason, it seems that it cannot be done by the video drivers themselves,
be it general reluctance to support retrace sync on Linux, or whatever… :-/

//David

On Tuesday 27 March 2001 18:29, Christoffer Gurell wrote:

On Tue, Mar 27, 2001 at 07:08:48 -0800, Sam Lantinga wrote:

Is there any way to get triple buffering in SDL? If not, are there any
plans to implement it? Perhaps we could get a discussion going on how it
can be achieved?

My goodness, my answer to everything these days seems to be…

Yes, it’s planned. Wait for SDL 1.3

How are you planning to implement this? I have been thinking about it, and I
mostly come up with problems in implementing it in Linux… If you have a
solution, perhaps I could help implement it.

Triple buffering is about making hard real time softer, basically.

With a double buffered display, your application has to run very tightly
synchronized with the video refresh rate in order to get maximum time for
rendering. Even with hardware pageflipping, with only two buffers, you can’t
flip before you’re completely done with the back buffer, and you can’t start
rendering the next frame before you get it back from the video driver. (That
is, you have to wait until it’s flipped out of display.)

Now, with a third buffer, it’s a bit different. After sending your first
buffer off to the video driver (flip), you’ll get another one to render into,
just as with double buffering. However, after sending that buffer off, no
matter when you do that, there’s still one buffer that no one is using, so
you can start rendering the third frame right away.

Not until you’ve sent the third buffer off to the video driver do you have
to sleep to get a new buffer. When you do that, there will be one buffer
drawing on the CRT and two buffers in line, just like samples in an audio
buffer.

When the buffer currently being displayed is released, our application is probably
waiting for it, and will thus be woken up. At that point, we have two buffers
enqueued for display. That is, we can wait up to two frame periods before
sending the next buffer off to the driver. If that happens, there will be two
buffers enqueued for return to us as soon as the new buffer is switched in
for display, so we could theoretically render two frames in almost zero time
to catch up.

In short, triple buffering makes it possible to pump out a steady full frame
rate display without using a real time OS to get your rendering thread
scheduled in time - and this even if your application sometimes needs more
than 100% of the CPU time to render a frame!

Not totally useless, eh? :)
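
A minimal sketch of that buffer rotation (not SDL code; render_frame(),
queue_for_display(), wait_for_released_buffer() and poll_released_buffers()
are assumed primitives standing in for whatever the driver interface ends up
looking like):

    extern void render_frame(void *buf);          /* draw the next frame */
    extern void queue_for_display(void *buf);     /* hand buffer to the driver, asynchronously */
    extern void wait_for_released_buffer(void);   /* block until the driver returns one */
    extern int  poll_released_buffers(void);      /* non-blocking; how many came back */

    void render_loop(void *buffers[3])
    {
        int free_count = 3;   /* buffers we may draw into right now */
        int next = 0;         /* buffers are queued and released in FIFO order */

        for (;;) {
            while (free_count == 0) {
                /* Only when all three buffers are queued or on screen do we
                 * have to sleep. With two frames already lined up, we could
                 * be almost two refresh periods late and still not drop one. */
                wait_for_released_buffer();
                free_count++;
            }
            render_frame(buffers[next]);
            queue_for_display(buffers[next]);
            next = (next + 1) % 3;
            free_count--;

            free_count += poll_released_buffers();   /* reclaim without blocking */
        }
    }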

//David

On Tuesday 27 March 2001 19:02, Jaren Peterson wrote:

Just a quick one… I’m really new to any sort of graphical programming.
What would be the use of triple buffering over double buffering? I’m not
slamming anyone or anything, I just can’t think of a reason for it.
thanks

Yes, it’s planned. Wait for SDL 1.3

how are you planning to implement this?

There will be a general surface <-> video mode decoupling with several
different ways of describing remote/hardware surface memory. The model
will be general enough that you can use it to describe oversize surfaces
and break them down into multiple flipping pages.

Sounds nice.

I have an architecture in mind, and I want to get a rough implementation
down so that people can play with it and work on it at that point.

I’ll be there! :)

I’m not going to have any time to work on it for the next two months,
so please don’t pester me until then. :)

Oh well, there are some serious Linux issues to sort out, so there’s still
work to do for anyone who cares. (I’ve got more than enough anyway, but I
really don’t want to switch to Windows just to get acceptable animation
quality… I’ll post what I’ve got so far within two weeks, I think.)

//David

On Tuesday 27 March 2001 19:07, Sam Lantinga wrote:

Sam said:

My goodness, my answer to everything these days seems to be…

Yes, it’s planned. Wait for SDL 1.3

Sam, will SDL be able to cook toast for me? ;)

-bill!

Let’s say you just rendered a frame and are ready to render another, but
your double buffer isn’t yet flipped to the screen since the vertical
retrace (vrt) hasn’t occurred yet… then your program will just have to wait
until the screen is flipped… this is where a third buffer comes in… instead
of waiting, you simply render to the third buffer, thereby getting higher
framerates :))…

In DOS you could achieve this by hooking the timer interrupt so that it ran
a routine JUST before the vrt, and this routine then flipped the screens for
you if you’d had time to render a new one… but I really don’t know how to
achieve it in Linux… perhaps with a sleeping thread or something, if there
is a way to make sure it wakes close enough to the vrt and doesn’t miss it…

hope this helped

/ chrisse

On Tue, Mar 27, 2001 at 10:02:17 -0700, Jaren Peterson wrote:

Just a quick one… I’m really new to any sort of graphical programming. What
would be the use of triple buffering over double buffering? I’m not slamming
anyone or anything, I just can’t think of a reason for it.

A few advantages: sometimes when you’re rendering a scene to video you have
to wait for vsync or something. In this case, it would render directly to
available memory, and after that the buffer and the main screen buffer would
get flipped. Since some frames of animation take longer than others to
render, not having to wait for vsync (SDL_Flip) allows you to take advantage
of that time slice and speed up the overall FPS.

For example, at 45 fps, if you double buffer and sync to vsync (60 Hz
perhaps), then I think by calling SDL_Flip you get 1/15th of a second where
you just wait there and do nothing. In this case it would render the scene
again; maybe that scene is complex and would have taken more than one vsync
period to render, so you take advantage of that time slice and the FPS will
increase overall.

This would speed up cases where the FPS < Vsync, I believe. If FPS > Vsync,
then it will always be waiting to sync because the frames of animation run
with plenty of time to spare. I could be wrong…
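
For rough numbers (assuming a 60 Hz refresh, i.e. about 16.7 ms per
refresh): a frame that takes 22 ms to render (45 fps worth of work) misses
one retrace and has to wait for the next, so with plain double buffering
every frame occupies two refresh periods, the display runs at 30 fps, and
roughly 11 ms per frame is spent waiting. With a third buffer that waiting
time can be spent rendering the following frame instead, so the average rate
can stay close to the full 45 fps, even though each individual frame still
goes up on a refresh boundary.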

Thanks,

Matt

Just a quick one… I’m really new to any sort of graphical programming. What
would be the use of triple buffering over double buffering? I’m not slamming
anyone or anything, I just can’t think of a reason for it.
thanks
Jaren

[…another explanation of triple buffering…]

This would speed up cases where the FPS < Vsync, I believe. If FPS > Vsync,
then it will always be waiting to sync because the frames of animation run
with plenty of time to spare. I could be wrong…

Well, ideally, that would be the case. However, as we’re generally not
dealing with hard real time operating systems here, you’ll need the extra
slack provided by the extra buffer to ensure that you actually get full frame
rate all the time.

Even if you only use a fraction of the available CPU power, on a normal OS,
you’ll always get an occasional dropped frame for no obvious reason. The risk
of that happening can be reduced significantly if the time critical flipping
(done by the driver) doesn’t have to depend as heavily on the timing of your
application.

//David

On Tuesday 27 March 2001 21:10, Matt Johnson wrote:

Thanks to everyone for the explanation. That all sounds very logical and
helpful. I’m jumping on the bandwagon and anxiously awaiting the triple
buffer release :)
thanks again
Jaren

David Olofson wrote:

[…David’s triple buffering explanation snipped; quoted in full above…]

Just a quick one… I’m really new to any sort of graphical programming.
What would be the use of triple buffering over double buffering? I’m not
slamming anyone or anything, I just can’t think of a reason for it.

Let’s say you just rendered a frame and are ready to render another, but
your double buffer isn’t yet flipped to the screen since the vertical
retrace (vrt) hasn’t occurred yet… then your program will just have to wait
until the screen is flipped… this is where a third buffer comes in… instead
of waiting, you simply render to the third buffer, thereby getting higher
framerates :))…

Nice and short explanation. :)

In DOS you could achieve this by hooking the timer interrupt so that it ran
a routine JUST before the vrt, and this routine then flipped the screens for
you if you’d had time to render a new one… but I really don’t know how to
achieve it in Linux… perhaps with a sleeping thread or something, if there
is a way to make sure it wakes close enough to the vrt and doesn’t miss it…

Well, that’s the problem: Timing… At least if there isn’t any hardware
pageflipping support.

If there is, you don’t really have to wait until just before the retrace.
Just make sure you’re sufficiently far away from the previous retrace not to
flip one frame too early. That effectively means that you can feed the buffer
pointer for the next frame right after the retrace has ended. (That does
not apply to the hardware scrolling registers on many newer cards, but
that’s another story…)

Now, current Linux drivers aren’t that nice. Back->front blitting is the only
way to flip.

Using RTL, RTAI or even Linux/lowlatency, one could just set a suitable
timing source up and then loop {sleep; flip;}, passing buffer pointers via
lock-free FIFOs or something.
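
A minimal sketch of that {sleep; flip;} idea (not working driver code;
flip_to() and the fixed 60 Hz period are assumptions, and the "lock-free"
ring below skips the memory barriers a real implementation would need):

    #include <time.h>

    extern void flip_to(void *buf);   /* hypothetical: program the CRTC base
                                         address, or do the back->front blit
                                         in sync with the retrace */

    #define RING_SIZE 4               /* power of two, more than the buffer count */

    static void *ring[RING_SIZE];
    static volatile unsigned head, tail;   /* head: render thread, tail: flip thread */

    int ring_push(void *buf)          /* called by the render thread */
    {
        if (head - tail == RING_SIZE)
            return 0;                 /* full; keep rendering into a spare buffer */
        ring[head % RING_SIZE] = buf;
        head++;
        return 1;
    }

    void *ring_pop(void)              /* called by the flip thread */
    {
        void *buf;
        if (head == tail)
            return 0;                 /* nothing new this refresh */
        buf = ring[tail % RING_SIZE];
        tail++;
        return buf;
    }

    void *flip_thread(void *arg)      /* start with pthread_create() */
    {
        struct timespec period = { 0, 16666667 };   /* ~60 Hz, assumed */
        void *buf;
        for (;;) {
            nanosleep(&period, 0);    /* crude timing source; RTL/RTAI or a
                                         retrace interrupt would do better */
            buf = ring_pop();
            if (buf)
                flip_to(buf);
        }
        return arg;
    }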

There is a third way though: Blitting only a part of the screen at a time.
I’ve tried it, and it works pretty well, with very good jitter tolerance,
compared to the way it’s usually done.

//David

On Tuesday 27 March 2001 21:02, Christoffer Gurell wrote:

On Tue, Mar 27, 2001 at 10:02:17 -0700, Jaren Peterson wrote:

Sam said:

My goodness, my answer to everything these days seems to be…

Yes, it’s planned. Wait for SDL 1.3

Sam, will SDL be able to cook toast for me? ;)

Yes!

Heheh
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

that wouldn't work, as you can't just poke at the retrace registers at any
time if some other driver is using the card. Meanwhile, it seems

I can see how a retrace interrupt and direct access to the hardware was
useful with DOS applications, specifically demos, because in DOS you were
pretty much guaranteed that you were the sole user of the hardware, so you
didn’t have to do any sharing nor prepare for someone else interrupting you.

This was perfect for games and other time-sensitive applications; however,
what I don’t understand is why we’re trying to do the same with operating
systems designed to execute many tasks simultaneously.

Olivier A. Dagenais - Software Architect and Developer

Because we have to, unless we want tearing and unsmooth animation. (No,
multitasking and real time processing are definitely not mutually
exclusive, especially not when it comes to “heavily” buffered things like
audio and video.)

Now, the idea is that the drivers (or even better, the video cards) should
deal with the time critical part of displaying a stream of buffers, providing
a nice interface to the applications.

However, this is not how it works currently, so it’s either accepting the
tearing, or bypassing the drivers to do various tricks from within the
applications. Better ideas are very welcome.

//David

On Tuesday 27 March 2001 23:00, Olivier Dagenais wrote:

that wouldn't work, as you can't just poke at the retrace registers at any
time if some other driver is using the card. Meanwhile, it seems

I can see how a retrace interrupt and direct access to the hardware was
useful with DOS applications, specifically demos, because in DOS you were
pretty much guaranteed that you were the sole user of the hardware, so you
didn’t have to do any sharing nor prepare for someone else interrupting
you.

This was perfect for games and other time-sensitive applications, however
what I don’t understand is why we’re trying to do the same with operating
systems designed to execute many tasks simultaneously.

Cool! :) Is there a nice API planned for that as well?

//David

On Tuesday 27 March 2001 22:51, Sam Lantinga wrote:

Sam said:

My goodness, my answer to everything these days seems to be…

Yes, it’s planned. Wait for SDL 1.3

Sam, will SDL be able to cook toast for me? ;)

Yes!

So we can be kewler ;-)

On Tue, 27 Mar 2001, you wrote:

This was perfect for games and other time-sensitive applications, however
what I don’t understand is why we’re trying to do the same with operating
systems designed to execute many tasks simultaneously.


Sam “Criswell” Hart <@Sam_Hart> AIM, Yahoo!:
Homepage: < http://www.geekcomix.com/snh/ >
PGP Info: < http://www.geekcomix.com/snh/contact/ >
Advogato: < http://advogato.org/person/criswell/ >

Is it done yet?

How about now?

On Tue, Mar 27, 2001 at 09:07:19AM -0800, Sam Lantinga wrote:

I have an architecture in mind, and I want to get a rough implementation
down so that people can play with it and work on it at that point.
I’m not going to have any time to work on it for the next two months,
so please don’t pester me until then. :)


Martin

Bother, said Pooh as he unloaded his Aries Predator on Piglet.