Just thinkin’ sum’…
I can see a few problems with OpenGL and a “transparent” 2D
rendering API that would also lend itself directly to non-accelerated
rendering.
* Texture size constraints
* Cannot blit from anywhere to anywhere even in VRAM
* Alpha blending, color keying etc - different methods
* Pixel effects - sync + access overhead problems
* CPU cannot read from VRAM w/o serious performance impact
As to blitting: there are cards with colorkeying problems (I’ve heard
some cards are broken - though that may have been a Direct3D driver
issue; I can’t remember exactly what it was about), cards with
different ways of doing alpha (some apparently can’t apply a single
alpha value to an entire RGB surface), and so on. This could all be
solved by simply requiring applications to “upload” or “convert” the
data into an internal format that suits the current implementation -
somewhat like the automatic conversion to the screen pixel format -
and then never dealing with pixels directly.
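To make the upload-time conversion idea concrete, here’s a rough sketch (the function name and formats are just made up for the example - the point is that the application hands over pixels once, and the engine keeps them in whatever format it likes):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical example: convert an RGBA8888 source surface into a
 * packed RGB565 "internal" format at upload time, so the engine
 * never has to deal with the application's pixel format again. */
static uint16_t *upload_rgba8888_to_rgb565(const uint32_t *src,
                                           int w, int h)
{
    uint16_t *dst = malloc((size_t)w * h * sizeof *dst);
    if (!dst)
        return NULL;
    for (int i = 0; i < w * h; ++i) {
        uint32_t p = src[i];
        uint32_t r = (p >> 24) & 0xff;  /* assuming RGBA byte order */
        uint32_t g = (p >> 16) & 0xff;
        uint32_t b = (p >> 8) & 0xff;
        dst[i] = (uint16_t)(((r >> 3) << 11) |
                            ((g >> 2) << 5) |
                            (b >> 3));  /* alpha dropped here */
    }
    return dst;
}
```

In OpenGL mode the same entry point would instead end up in glTexImage2D() with whatever internal format the driver prefers.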
Texture sizes are worse, but as long as we stick to always blitting
entire sprites, tiles etc., this is no problem - uploading the data to
the engine will take care of it.
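Something like this, roughly (just a sketch; the 256x256 limit is only an example - think old 3dfx cards - and the helper names are mine):

```c
/* Hypothetical sketch: hiding texture size limits at upload time by
 * splitting a surface into power-of-two tiles no larger than the
 * driver's maximum texture size. */

/* Smallest power of two >= n (old GL needs power-of-two textures). */
static int next_pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* How many tiles of max_tile pixels are needed to cover size pixels
 * (rounding up, so the last tile may be partially used). */
static int tiles_needed(int size, int max_tile)
{
    return (size + max_tile - 1) / max_tile;
}
```

So a 640x480 background on a card limited to 256x256 would become a 3x2 grid of tiles, and the blitter would walk the grid - invisible to the application.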
The last two issues, pixel effects and CPU reads, can probably be
seen as closely related in most cases - you have to read some data to
blend the pixels in, one way or another. (Either because you need to
do some kind of blending, or just because the bus word size doesn’t
match the size of the single pixel you’re writing, thus causing the
CPU or bus logic to automatically do a read-modify-write operation.)
What I have in mind for this isn’t exactly compatible with software
rendering methods, but could perhaps be nicer than no consensus at
all: Implement pixel renderers as callbacks (instead of textures),
somewhat similar to sprites. The callbacks should expect two raw
surface pointers (screen pixel format); one for reading and one for
writing.
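Roughly this kind of interface (all names are mine, not an actual SDL API - and I’m assuming a 32 bpp screen format for the example):

```c
#include <stdint.h>

/* Hypothetical callback interface: the engine hands the renderer a
 * read pointer and a write pointer, both in the screen pixel format,
 * and the callback generates/blends the pixels itself. */
typedef struct pixel_surface {
    uint32_t *pixels;   /* raw pixels, screen format */
    int w, h;
    int pitch;          /* in pixels, not bytes */
} pixel_surface;

typedef void (*pixel_renderer)(const pixel_surface *rd,
                               pixel_surface *wr, void *user);

/* Example callback: 50/50 blend of the read surface into the write
 * surface, all four channels at once (the 0xfefefefe mask drops the
 * low bit of each byte so the halves can be added without carry). */
static void blend_half(const pixel_surface *rd, pixel_surface *wr,
                       void *user)
{
    (void)user;
    for (int y = 0; y < wr->h; ++y)
        for (int x = 0; x < wr->w; ++x) {
            uint32_t s = rd->pixels[y * rd->pitch + x];
            uint32_t d = wr->pixels[y * wr->pitch + x];
            wr->pixels[y * wr->pitch + x] =
                ((s & 0xfefefefeu) >> 1) + ((d & 0xfefefefeu) >> 1);
        }
}
```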
In software mode, the read pointer would usually be in system RAM,
and the write pointer in VRAM (if possible), while in OpenGL mode,
both would (probably) be in the AGP aperture. This will vary between
hardware and drivers, but the general idea is that SDL gets to decide.
Procedural tiles and sprites would work like procedural textures,
although it might be a good idea to let games disable RGBA data when
rendering in software. There should be a sufficient buffering system,
so that the GPU and CPU don’t have to actively sync a few times per
frame, as that’s usually a serious performance killer.
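By “sufficient buffering” I mean something like a small ring of buffers, so the CPU can fill buffer N+1 while the GPU is still reading buffer N, instead of stalling on a sync every frame. A trivial sketch (names and buffer count are just for illustration):

```c
/* Hypothetical sketch: a small ring of procedural-surface buffers.
 * Three buffers is a common choice - enough to decouple the CPU from
 * the GPU without unbounded latency. */
#define NUM_BUFFERS 3

typedef struct buffer_ring {
    int current;                  /* buffer the CPU wrote last */
    void *buffers[NUM_BUFFERS];
} buffer_ring;

/* Advance to the next buffer and return it for CPU writing; the
 * previously returned buffer is now free for the GPU to read. */
static void *next_write_buffer(buffer_ring *r)
{
    r->current = (r->current + 1) % NUM_BUFFERS;
    return r->buffers[r->current];
}
```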
I wish I had the time to just hack away on all that…
//David
.- M u C o S -------------------------. .- David Olofson --------.
| A Free/Open Source | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
| for | | Open Source Advocate |
| Professional and Consumer | | Singer |
| Multimedia | | Songwriter |
`-----> http://www.linuxdj.com/mucos -'  `-> david at linuxdj.com -'

On Thu, 16 Nov 2000, Mattias Engdegård wrote:
> > I noticed lots of games are using OGL hardware acceleration for 2D
> > games (for obvious benefits). I was wondering if it would be a good
> > idea to have SDL transparently use OGL for 2D surfaces if it is
> > present on the current system, and then fall back to software if
> > OGL isn’t available.
>
> I proposed using opengl for 2d rendering on #sdl, and the response was
> mixed. It would probably not be useful as a generic 2d rendering
> target since direct pixel access is likely to be very slow, but many
> alpha operations could be nicely accelerated. No doubt the devil is in
> the details (surfaces have to be split up in texture-sized chunks and
> tiled, and many other problems), but it could be useful as a target
> for games written with its limitations in mind.