Still thinking about ways to bridge the gap between Direct3D and
OpenGL (and software rendering) without making a complete mess of the
API and/or the backend implementations.
Considering the semantics of SDL_BlitSurface() and SDL_RenderCopy(),
it seems logical to handle quads pretty much like OpenGL, with
explicit texture coordinates. That is, you specify four source
vertices (“texcoords”) and four destination coordinates:
int SDL_RenderCopyQuad(
        SDL_TextureID textureID,
        int srcX0, int srcY0,
        int srcX1, int srcY1,
        int srcX2, int srcY2,
        int srcX3, int srcY3,
        int dstX0, int dstY0,
        int dstX1, int dstY1,
        int dstX2, int dstY2,
        int dstX3, int dstY3,
        int blendMode, int scaleMode);
(Eww… Too many arguments for my taste - but you get the idea.) This
preserves the current logic that you can blit from an area inside a
texture, not just the entire texture. I also think this is the most
useful way of doing it.
A simpler alternative would be the basic “stretch the texture so its
corners meet the respective vertices”. That is, texture coordinates
are implicitly fixed to the corners of the texture, and all the
srcXn/srcYn arguments would be removed from the prototype above.
However, I think this limits the usefulness of this feature too much,
relative to the extra work of doing it “properly”. (I know there
are issues with explicit texture coords - more on that below.)
Has anyone here tried doing serious work with an API where you
cannot specify texture coordinates explicitly? Wasn’t the old Flash
2D stuff like that…? (People have made 3D games with that.) How
severe is this restriction in real life applications?
Anyway, with implicit texture coordinates, one does not have to worry
about what happens if you run off the edge of the texture, because
that will never happen! With explicit texture coordinates, it’s
perfectly possible to end up with a final rendering area with eight
sides. (Or worse, if coordinates are allowed to be twisted, i.e.
butterflies instead of quads.)
Simple solution: Just don’t allow off-texture texcoords, and don’t
allow “twisted” (butterfly shaped, non-convex) quads in texture
space. Clamp, fail or whatever if the application tries these things.
That way, implementations can make the assumptions that the
destination vertices define a zone of pixels that should all be
affected, and that there is valid texture data for every pixel
rendered.
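A sketch of what such a validity check could look like: a quad is
convex and untwisted exactly when the cross products of consecutive
edges all share a sign. The integer coordinates and function names
here are just for the example:

```c
typedef struct { int x, y; } Pt;

/* z component of (b - a) x (c - b). */
static int cross_z(Pt a, Pt b, Pt c)
{
    return (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
}

/* Returns nonzero if the quad p[0..3] is convex and not "twisted"
   (butterfly-shaped): every turn along the boundary must go the same
   way, i.e. all edge cross products share a sign. Degenerate
   (collinear) edges yield zero and are accepted here. */
static int quad_is_convex(const Pt p[4])
{
    int pos = 0, neg = 0;
    for (int i = 0; i < 4; ++i) {
        int z = cross_z(p[i], p[(i + 1) % 4], p[(i + 2) % 4]);
        if (z > 0) pos = 1;
        if (z < 0) neg = 1;
    }
    return !(pos && neg);
}
```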
Next problem: implementation. Rendering polygons (even triangles) is
not quite as straightforward as rendering non-rotated rectangles. No
rocket science though, and it’s been done countless times before. And
of course, OpenGL and Direct3D will do this automatically - probably
even in hardware.
However, what happens when blit plugins (as suggested by the 1.3 TODO)
are thrown in the mix? Does every blit plugin have to implement this
logic? Does every blit plugin have to implement both this and the
plain rectangular blit call?
How about a call that renders into a trapezoid shaped destination
area? (That is, an SDL_Rect + bottom left and bottom right offsets,
or something like that.) Not much harder to handle than a plain
rectangle, but it’s sufficient as a building block for rendering
anything from triangles to complex polygons.
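A sketch of why the trapezoid case stays cheap, assuming a struct
along the lines of an SDL_Rect plus the two bottom offsets (all names
made up for the example): the per-scanline span bounds are plain
linear interpolations, so a blitter's inner loop remains a simple
horizontal run.

```c
typedef struct {
    int x, y, w, h;       /* like SDL_Rect: top edge and height */
    int bl_ofs, br_ofs;   /* how far the bottom corners are shifted */
} Trapezoid;

/* Compute the [left, right) span at scanline y + row, 0 <= row < h.
   Integer interpolation from the top corners to the offset bottom
   corners; a real blitter would likely use fixed-point steps instead
   of a division per line. */
static void trapezoid_span(const Trapezoid *t, int row,
                           int *left, int *right)
{
    int last = (t->h > 1) ? t->h - 1 : 1;
    *left  = t->x + t->bl_ofs * row / last;
    *right = t->x + t->w + t->br_ofs * row / last;
}
```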
Oh, and this also makes it more reasonable to support off-texture
texcoords and/or non-convex quads if desired, because we only need
one central polygon splitter implementation to deal with such things.
(One might have to use this with OpenGL and Direct3D as well, for
consistent behavior across backends.)
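For example, that central splitter could be a standard
Sutherland-Hodgman clipper. This sketch (types and names are mine)
clips a polygon against just one edge of texture space, u >= 0;
repeating it for the other three edges handles any off-texture case:

```c
typedef struct { float u, v; } Vec2;

/* Clip a polygon against the half-plane u >= 0 (the texture's left
   edge). Returns the new vertex count; 'out' must have room for
   n + 1 vertices. Running this for all four edges of the texture
   rectangle gives the full Sutherland-Hodgman clipper. A real
   implementation would interpolate the destination coordinates
   alongside u/v. */
static int clip_left(const Vec2 *in, int n, Vec2 *out)
{
    int m = 0;
    for (int i = 0; i < n; ++i) {
        Vec2 a = in[i], b = in[(i + 1) % n];
        int a_in = a.u >= 0.0f, b_in = b.u >= 0.0f;
        if (a_in)
            out[m++] = a;
        if (a_in != b_in) {
            /* The edge crosses u = 0; emit the intersection point. */
            float t = a.u / (a.u - b.u);
            out[m].u = 0.0f;
            out[m].v = a.v + t * (b.v - a.v);
            ++m;
        }
    }
    return m;
}
```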
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --’