Sharing a surface

Is there a way for two applications (distinct processes) to share a
surface? That is, can I do something like the following:

App 1 creates a surface and renders to it.
App 2 creates a surface and renders to it.
App 3 then takes both of the above surfaces and blits them, with an alpha
blend, to the main display, compositing the apps together.

This means App 3 needs to reference the surfaces in the other apps. These
would be HW surfaces.

Thanks,
Brian

You can try allocating shared memory for the pixel buffer, then
generate a SDL_Surface from it by calling SDL_CreateRGBSurfaceFrom.
It isn’t a HW surface, but you shouldn’t use HW surfaces anyway when
blending.

Tim

Thanks for the info.

Why not use HW surfaces for blending? If the hardware can support a blit
with alpha, then this would be accelerated. If the blit does not support
alpha, then I see why you would not want to use a HW surface with a blend.
I suppose it depends on the particular backend implementation, correct?
I will have to look to see if HW alpha is a capability you can query.

Thanks,

Brian

“Tim Goya” wrote in message
news:ff70c5070711170842q6ede06ev175862f647ca7869 at mail.gmail.com…

You can try allocating shared memory for the pixel buffer, then
generate a SDL_Surface from it by calling SDL_CreateRGBSurfaceFrom.
It isn’t a HW surface, but you shouldn’t use HW surfaces anyway when
blending.

Tim

On Saturday 17 November 2007, Brian Edmond wrote:

Thanks for the info.

Why not use HW surfaces for blending?

Because it’s usually very slow unless it’s accelerated. Reading from
VRAM is an insane number of times slower than reading from system
memory on most modern hardware.

Also note that to blend, you have to read the target surface as
well, so if you’re dealing with much more than a blended mouse
pointer, you’re better off doing all rendering in a software shadow
buffer and then blitting that to the display surface.

(To do this with proper page flipping where available, DO NOT ask SDL
for a software display surface. Instead, ask for a double buffered
hardware display surface, and then set up your own software shadow
surface.)

If the hardware can support a blit with alpha then this would be
accelerated. If the blit does not support alpha then I see why you
would not want to use a HW surface with a blend. I suppose it
depends on the particular backend implementation, correct?

Yes - and the only SDL 1.2 backend that supports it is the one for
DirectFB. (Accelerated “add-on” for Linux fbdev, that is.)

Other than that, you need the glSDL backend patch for SDL 1.2, the
glSDL application side wrapper (the original implementation), or you
need to use SDL 1.3. The last one has the advantage of supporting
Direct3D, whereas the glSDL variants need OpenGL to be of much use.

I will have to look to see if HW alpha is a capability you can
query.

Not sure about SDL 1.2, but I’m pretty sure you can in SDL 1.3. IIRC,
it has an “accelerated” flag for every blit feature.
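For what it's worth, SDL 1.2 does expose this through SDL_GetVideoInfo(): the SDL_VideoInfo struct carries blit_hw_A and blit_sw_A flags reporting whether alpha blits to hardware surfaces are accelerated. A sketch (assumes SDL 1.2 development headers and a working video device, so it is not runnable everywhere):

```c
#include <stdio.h>
#include "SDL.h"

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init: %s\n", SDL_GetError());
        return 1;
    }
    const SDL_VideoInfo *vi = SDL_GetVideoInfo();
    printf("HW surfaces available:          %d\n", vi->hw_available);
    printf("accelerated HW->HW alpha blits: %d\n", vi->blit_hw_A);
    printf("accelerated SW->HW alpha blits: %d\n", vi->blit_sw_A);
    SDL_Quit();
    return 0;
}
```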

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'