Hardware-accelerated memory-to-screen blitting in X11 / OpenGL?

Hi,

SDL supports hardware-accelerated blitting of “normal” surfaces to the
screen surface. However, this mode is not supported on X11 (except for
some cards when using DGA). In fact, the only backend where hardware-accelerated
memory-to-screen blitting is well supported seems to be the DirectX5
backend for Windows.

Is it possible to achieve the same functionality using SDL together with
OpenGL? Note: I do not want to use offscreen rendering. Rather, I want to
perform the complete rendering into a surface in main memory and then
blit parts of that surface to graphics memory using DMA. Think of a
big frame buffer where some content changes and only the changed part
of the frame buffer should be blitted to graphics memory (to avoid
full-screen updates).

I already had a look at the tests/testgl.c example, but that example deals
with a static surface/texture.

If this is not possible, are there other ways to achieve the same behaviour
on X11? What about the XRender extension?

Thanks in advance,

Frank

Dept. of Computer Science, Dresden University of Technology, Germany

http://os.inf.tu-dresden.de/~fm3

Frank Mehnert wrote:

Is it possible to achieve the same functionality using SDL together with
OpenGL? Note: I do not want to use offscreen rendering. Rather, I want to
perform the complete rendering into a surface in main memory and then
blit parts of that surface to graphics memory using DMA. Think of a
big frame buffer where some content changes and only the changed part
of the frame buffer should be blitted to graphics memory (to avoid
full-screen updates).

Video cards aren’t really optimized for this…with OpenGL, you’ll want
to put all your textures (bitmaps) into video memory and create the
scene from there every frame. This is very very VERY fast, so long as
you don’t need per-pixel access to the scene. If you want to put stuff
from a memory buffer to the screen every frame, expect slowdowns in any
rendering API. Modern hardware wants you to put everything to the video
card once and then let the card work with it every frame.
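
For illustration, here is a minimal sketch of that upload-once pattern
(SDL 1.2 with OpenGL 1.x; the function name and the rgba/w/h parameters
are placeholders, not anything SDL itself provides):

#include <SDL.h>
#include <SDL_opengl.h>

/* Create a texture once at startup and keep it in video memory;
 * the per-frame work is then just drawing with it.  On older
 * hardware, w and h should be powers of two. */
GLuint create_static_texture(const void *rgba, int w, int h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    return tex;
}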

That being said, you can render something as complex as UT2004 with its
software renderer into a memory buffer and blit it to the screen using
regular X11 from SDL…and still get 50-70 fps on high-end desktop
machines, so less CPU-intensive games should do even better. But if you
can avoid doing this in system memory, framerate on 2D games becomes a
complete non-issue.

--ryan.

Yes, I’m aware of this. Unfortunately, I need per-pixel access, and I’m
not talking about developing games here. Blitting performance from memory
to screen is essential for me…

Frank

Frank Mehnert wrote:

If this is not possible, are there other ways to achieve the same
behaviour on X11? What about the XRender extension?

You should look at the glscale backend:
http://icps.u-strasbg.fr/~marchesin/sdl/sdl_glscale.patch

If the OpenGL implementation supports DMA for texture uploads (which
almost all drivers do), it will accelerate uploads of screen portions and
try to avoid format conversions when possible. Plus, as a nice bonus, you
get free bilinear scaling :)
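
A minimal sketch of what such a partial upload looks like in GL (assuming
an RGBA main-memory buffer and an already-created texture of matching
size; GL_UNPACK_ROW_LENGTH lets GL read a sub-rectangle out of the big
buffer, and the driver can DMA the transfer if it supports it):

#include <SDL_opengl.h>

/* Upload only the dirty sub-rectangle (x,y,w,h) of a main-memory
 * RGBA buffer that is buf_w pixels wide into the given texture. */
void upload_dirty_rect(GLuint tex, const unsigned char *buf, int buf_w,
                       int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Tell GL how long a source row really is, so the pointer can
     * address a sub-rectangle inside the larger buffer. */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, buf_w);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE,
                    buf + (y * buf_w + x) * 4);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}

After the upload you redraw the textured quad and swap buffers as usual;
only the changed pixels have to cross the bus.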

Stephane

On Mon, 2006-04-24 at 12:55 +0200, Frank Mehnert wrote:

Yes, I’m aware of this. Unfortunately, I need per-pixel access, and I’m
not talking about developing games here. Blitting performance from memory
to screen is essential for me…

Have you tested how fast you can upload a texture using OpenGL? You
might be surprised at how quickly you can modify a texture in main
memory, upload it to the video card, and then draw the texture to the
screen.
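
A rough way to measure it, as a sketch (assumes a current GL context and
a texture created beforehand; glFinish() forces each upload to complete
so the loop times the transfer itself rather than command queueing):

#include <stdio.h>
#include <SDL.h>
#include <SDL_opengl.h>

/* Time 100 full-buffer uploads of a main-memory RGBA buffer. */
void time_uploads(GLuint tex, const void *pixels, int w, int h)
{
    Uint32 t0;
    int i;

    glBindTexture(GL_TEXTURE_2D, tex);
    t0 = SDL_GetTicks();
    for (i = 0; i < 100; ++i) {
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glFinish();   /* wait until the driver has really done it */
    }
    printf("%.2f ms per %dx%d upload\n",
           (SDL_GetTicks() - t0) / 100.0, w, h);
}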

What are you doing that requires per-pixel access? Maybe we can help you
find ways to avoid it.

	Bob Pendleton

Hi Ryan, Bob and Stephane,

thank you very much for your responses.

On Monday 24 April 2006 15:20, Stephane Marchesin wrote:

You should look at the glscale backend:
http://icps.u-strasbg.fr/~marchesin/sdl/sdl_glscale.patch

Thank you. I had a look at it. Since I don’t want to patch libSDL,
I took from it the code for creating an OpenGL surface and for uploading
OpenGL sub-surfaces to the screen.

If the OpenGL implementation supports DMA for texture uploads (which
almost all drivers do), it will accelerate uploads of screen portions and
try to avoid format conversions when possible. Plus, as a nice bonus, you
get free bilinear scaling :)

I already knew that OpenGL performance depends on properly working DRI
(glxinfo must report “direct rendering: Yes”). What I did not know is
that blitting with SDL is much faster when DRI works than when it is
disabled!

My understanding is that on X11, the SDL screen surface is always a memory
buffer (therefore a software surface). You don’t get direct access to the
window contents, since that would allow you to draw outside the window.
Blitting from memory to memory is much faster than blitting from memory
to screen (graphics memory).

I normally do

/* set SDL video mode */
mScreen = SDL_SetVideoMode(width, height, 0, sdlFlags);
  .
  .
  .

/* blit the virtual frame buffer of the application (mSurfVRAM) to the
 * screen surface (mScreen) */
SDL_BlitSurface(mSurfVRAM, &rect, mScreen, &rect);

/* notify X11 that something has changed */
SDL_UpdateRect(mScreen, rect.x, rect.y, rect.w, rect.h);

The SDL_UpdateRect call is necessary to inform X11 that something
has changed in the screen surface. It seems this function is able
to use DRI if it is enabled.
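
When several regions change per frame, the same idea extends to batching;
here is a sketch using the names from the snippet above (SDL_UpdateRects
pushes all dirty rectangles to X11 in one call):

#include <SDL.h>

/* Blit each dirty rectangle from the virtual frame buffer to the
 * screen surface, then notify X11 about all of them at once.  The
 * same rect is used for source and destination position, as in the
 * snippet above; SDL writes the clipped result back into it. */
void flush_dirty(SDL_Surface *mSurfVRAM, SDL_Surface *mScreen,
                 SDL_Rect *dirty, int ndirty)
{
    int i;
    for (i = 0; i < ndirty; ++i)
        SDL_BlitSurface(mSurfVRAM, &dirty[i], mScreen, &dirty[i]);
    SDL_UpdateRects(mScreen, ndirty, dirty);
}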

Frank

Frank Mehnert wrote:

Hi Ryan, Bob and Stephane,

Hi, and sorry for the delay.

I already knew that OpenGL performance depends on properly working DRI
(glxinfo must report “direct rendering: Yes”). What I did not know is
that blitting with SDL is much faster when DRI works than when it is
disabled!

I didn’t say that. What I said, basically, is that hardware-accelerated
OpenGL scaling can be about as fast as standard 2D blitting. Also, if you
don’t use OpenGL, whether direct rendering is enabled should not make any
difference.

My understanding is that on X11, the SDL screen surface is always a memory
buffer (therefore a software surface). You don’t get direct access to the
window contents, since that would allow you to draw outside the window.
Blitting from memory to memory is much faster than blitting from memory
to screen (graphics memory).

Yes.

I normally do

/* set SDL video mode */
mScreen = SDL_SetVideoMode(width, height, 0, sdlFlags);
  .
  .
  .

/* blit the virtual frame buffer of the application (mSurfVRAM) to the
 * screen surface (mScreen) */
SDL_BlitSurface(mSurfVRAM, &rect, mScreen, &rect);

As you said, with the X11 backend, the screen is a software surface, so
you should draw directly to it; it’s faster.
You still should call SDL_UpdateRect(s), though.
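
For example (a sketch, assuming a 32-bpp video mode; the
SDL_MUSTLOCK/SDL_LockSurface pair guards direct pixel access, and the
function name is just a placeholder):

#include <SDL.h>

/* Fill one row of the screen surface directly, then push it out. */
void draw_row(SDL_Surface *screen, int y, Uint32 color)
{
    Uint32 *row;
    int x;

    if (SDL_MUSTLOCK(screen) && SDL_LockSurface(screen) < 0)
        return;
    row = (Uint32 *)((Uint8 *)screen->pixels + y * screen->pitch);
    for (x = 0; x < screen->w; ++x)
        row[x] = color;
    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);
    SDL_UpdateRect(screen, 0, y, screen->w, 1);
}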

/* notify X11 that something has changed */
SDL_UpdateRect(mScreen, rect.x, rect.y, rect.w, rect.h);

The SDL_UpdateRect call is necessary to inform X11 that something
has changed in the screen surface. It seems this function is able
to use DRI if it is enabled.

With a single buffer, you always have to call that function, and it often
provides a nice speed improvement if you pass a rect that is as small as
possible.

Stephane