glSDL on an existing game

David Olofson wrote:

What I had in mind is that maybe there is some trick to fall back
to another driver, or something like this…

That would be your glscale backend, I guess.

Well, no. I had the thought that we could just fall back to the
underlying backend somehow.

I think glscale has some different functionality, so it shouldn’t be
mixed with another backend.

However, I’m not sure how uploading the whole OpenGL frame can be
faster than pure s/w rendering.

I can think of two cases:

  1. An SMP machine and an OpenGL driver that can do
    async. texture uploading in a separate thread.

Hmm, yup, that’s true. But then SDL has ASYNCBLIT to speed up s/w
rendering :slight_smile: (a minimal sketch of that flag follows this list)

  2. OpenGL drivers that implement DMA transfers, so
    that texture uploading is asynchronous and/or
    faster than CPU-driven uploading.
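
For reference, the ASYNCBLIT mentioned above is just a surface flag passed to SDL_SetVideoMode(); a minimal sketch against the SDL 1.2 API (the helper name is mine):

```c
/* SDL_ASYNCBLIT asks SDL to perform blits to this surface asynchronously
 * (typically in a separate thread).  Per the SDL 1.2 docs this usually
 * slows things down on single-CPU machines but can help on SMP systems,
 * which is roughly the s/w counterpart of case 1 above. */
#include <SDL/SDL.h>

SDL_Surface *open_screen(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return NULL;
    return SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE | SDL_ASYNCBLIT);
}
```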

Both cases imply that direct s/w rendering would still update most or
all of the screen every frame, as is the case in many scrolling games, and
probably in quite a few games that really should use some form of
“smart updating” instead. (Study my Fixed Rate Pig example and you’ll
realize why people don’t do it unless they really have to…)
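
To make the “smart updating” idea concrete, here is a minimal dirty-rectangle sketch against the SDL 1.2 API (this is the general technique, not code taken from Fixed Rate Pig, and the helper names are made up):

```c
/* "Smart updating": only push the areas that actually changed to the
 * screen, instead of a full SDL_Flip()/SDL_UpdateRect(screen, 0, 0, 0, 0)
 * every frame.  The hard part in a real game (and the reason people avoid
 * it) is reliably tracking every rect that needs erasing and redrawing. */
#include <SDL/SDL.h>

#define MAX_DIRTY 256

static SDL_Rect dirty[MAX_DIRTY];
static int      n_dirty;

void mark_dirty(Sint16 x, Sint16 y, Uint16 w, Uint16 h)
{
    /* A real implementation would fall back to a full update on overflow. */
    if (n_dirty < MAX_DIRTY) {
        SDL_Rect r = { x, y, w, h };
        dirty[n_dirty++] = r;
    }
}

void flush_dirty(SDL_Surface *screen)
{
    SDL_UpdateRects(screen, n_dirty, dirty);   /* one call, many small areas */
    n_dirty = 0;
}
```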

glscale v2 does partial updates, btw :slight_smile:
But that’s of little use when you have full-screen scrolling. And as I
always tend to think about the worst case…

Unfortunately, I’m afraid both cases are rather unusual on the
platforms that would need them the most, such as Linux. :-/

Well, in the “standard” form of the OpenGL commands, the data the driver
uploads has to be in the state it was in when the command was issued, so
deferred uploading is not possible, which forces most drivers to block
before the command returns.
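
A minimal C sketch of what that means in practice (the helper name is mine, not from the thread):

```c
/* glTexSubImage2D() must behave as if the pixel data were consumed at call
 * time, so the driver either copies the buffer or blocks before returning;
 * the application is free to overwrite `pixels` on the very next line.
 * Assumes `tex` was created with glTexImage2D() earlier. */
#include <GL/gl.h>
#include <string.h>

void upload_frame(GLuint tex, int w, int h, unsigned char *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* By the time this returns, the driver no longer needs `pixels`. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Legal immediately: the upload is (logically) already done. */
    memset(pixels, 0, (size_t)w * h * 4);
}
```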

More generally, having deferred uploading means the driver needs some way
to lock the data, which quickly turns programs into a mess with locks
everywhere. (The mechanism for DMA transfers in OpenGL with the nvidia
extensions, the “fence” system, is particularly braindead; it doesn’t
always get you higher performance, and it even lowers performance in
fillrate-limited situations.)
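
For illustration, here is roughly what the NV_fence pattern looks like; this sketch assumes the extension entry points have already been loaded (e.g. via GLEW) and omits the driver-allocated memory (NV_pixel_data_range) that the real asynchronous path also needs:

```c
/* Sketch of the GL_NV_fence synchronization pattern.  The "locks
 * everywhere" problem is visible: the application must keep its own busy
 * flag and may not touch the buffer it handed to GL until the fence
 * reports that the commands using it have completed. */
#include <GL/glew.h>   /* entry points resolved by glewInit() elsewhere */

static GLuint fence;
static int    buffer_busy;   /* our own "lock" on the pixel buffer */

void begin_upload(GLuint tex, int w, int h, unsigned char *pixels)
{
    glGenFencesNV(1, &fence);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glSetFenceNV(fence, GL_ALL_COMPLETED_NV);  /* marks the end of the work */
    buffer_busy = 1;                           /* `pixels` is off limits now */
}

void poll_upload(void)
{
    /* Non-blocking test; only once it passes may the buffer be reused.
     * glFinishFenceNV() would be the blocking variant. */
    if (buffer_busy && glTestFenceNV(fence)) {
        buffer_busy = 0;
        glDeleteFencesNV(1, &fence);
    }
}
```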

Stephane

David Olofson wrote:

Right. Speaking of which, the SDL_GLSDL flag shouldn’t really be a
flag, but rather use the environment-variable-based backend selection
API. Not sure if we really should do it that way, but it seems
logically correct to me…
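
For context, the existing selection mechanism referred to here is the SDL_VIDEODRIVER environment variable, which SDL reads when the video subsystem is initialized. A minimal sketch (SDL 1.2; the “glSDL” driver name is hypothetical and assumes glSDL were available as a real video backend):

```c
#include <SDL/SDL.h>

int main(int argc, char *argv[])
{
    /* Either the user exports SDL_VIDEODRIVER before running the program,
     * or the program sets it itself before SDL_Init() reads it. */
    if (!SDL_getenv("SDL_VIDEODRIVER"))
        SDL_putenv("SDL_VIDEODRIVER=glSDL");   /* hypothetical driver name */

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    if (!screen) {
        SDL_Quit();
        return 1;
    }

    SDL_Quit();
    (void)argc; (void)argv;
    return 0;
}
```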

Using environment variables is a suboptimal method for the program to
communicate with SDL, and an even more suboptimal way for the end user to
communicate with SDL, especially for Windows and Macintosh users who aren’t
used to messing with environment variables. Yes, using an environment
variable would be consistent with how other SDL backends work. No, it’s not
the right thing to do.

--
Rainer Deyke - rainerd at eldwood.com - http://eldwood.com