SDL_NOSHADOW

Well, I might be misunderstanding how SDL works, but…

AFAIK, with SDL, if you get a software screen surface, all blitting will
be done in software, even if you never need to lock the screen surface.
SDL_Flip() and SDL_UpdateRects() are ways of copying this
"shadow buffer" into VRAM for display.

If this were not done, SDL would have to copy the video buffer back to
system RAM, or somehow reconstruct the current state of the video surface
in the software shadow buffer, whenever you lock the screen surface. As it
is, SDL can just give you a pointer to the current software back buffer,
as that’s where all blitting is targeted anyway. (glSDL has the same
problem, but obviously has to go the other way to make any sense at all.
There’s not much point in using OpenGL if you never actually blit surfaces
with it!)
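
That’s why a pure software lock can be this cheap (again, plain SDL 1.2;
this sketch assumes a 32 bpp screen surface):

```c
#include "SDL.h"

/* Plot one white pixel. With a shadow buffer, the lock below is
 * essentially free: SDL just hands out a pointer to the software
 * back buffer that all blits target anyway. (Assumes 32 bpp.) */
static void plot_pixel(SDL_Surface *screen, int x, int y)
{
    if (SDL_MUSTLOCK(screen))
        if (SDL_LockSurface(screen) < 0)
            return;

    Uint32 *row = (Uint32 *)((Uint8 *)screen->pixels + y * screen->pitch);
    row[x] = SDL_MapRGB(screen->format, 255, 255, 255);

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);

    SDL_UpdateRect(screen, x, y, 1, 1);  /* push just this pixel to VRAM */
}
```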

So, if there were a way for SDL applications to tell SDL that they’re not
going to lock the screen surface for direct software rendering, there
would be some alternative ways of implementing the targets that
now use software shadow buffers + software blitting. (And of course, such
a flag would make it very easy to tell which applications will run well
with glSDL - but that’s not why I’m suggesting it.)

Any target that has a way of storing textures/images/pixmaps/surfaces
in VRAM and blitting them with acceleration could be used much more
effectively if SDL weren’t forced to maintain the software shadow buffer.

Of course, alpha blending on targets that can’t accelerate it would mean
trouble - emulating it in software would be harder, as there’s no
up-to-date shadow buffer to blend with.

Then again, alpha blending already does mean trouble if you accidentally
do it to a VRAM screen surface!
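
Continuing with the ‘sprite’ and ‘screen’ surfaces from the first
example, this is all it takes (standard SDL 1.2 calls):

```c
/* Per-surface alpha forces a software read-modify-write of the
 * destination. If 'screen' is a VRAM surface, every destination
 * read comes back over the bus - painfully slow on most hardware. */
SDL_SetAlpha(sprite, SDL_SRCALPHA, 128);      /* 50% translucency */
SDL_BlitSurface(sprite, NULL, screen, NULL);  /* reads 'screen' back! */
```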

What I’m suggesting is a flag ‘SDL_NOSHADOW’ or similar for
SDL_SetVideoMode(), telling the rendering backends that they may drop
support for locking and manipulating the screen surface, if that
helps blitting performance.
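
Usage would look something like this (SDL_NOSHADOW being the proposed,
not yet existing, flag):

```c
/* Hypothetical: SDL_NOSHADOW does not exist yet. Setting it would be
 * a promise never to call SDL_LockSurface() on the screen surface,
 * so the backend could skip the shadow buffer entirely and route
 * SDL_BlitSurface() through accelerated blits instead. */
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0,
                                       SDL_ANYFORMAT | SDL_NOSHADOW);
```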

Obviously, for glSDL (which is meant to become an OpenGL backend for
SDL), it would avoid the dreaded kludges required to “lock” the OpenGL
display buffer. For DirectX/windowed mode, Win32 GDI, X11, and probably
some other targets I know less about, it would mean that an approach
similar to that of glSDL could be used instead of shadow buffer +
software rendering. (That is, surfaces could be transferred to the driver
and blitted using h/w acceleration.)

Interesting, or should I just hack more and think less? :wink:

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
|    Multimedia Application Integration Architecture        |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'