Tile-based scrolling and OpenGL

Hello.

In a 2D scrolling game, the background (assume only one layer, since
parallax is not relevant to the question) is usually drawn from tiles.
The tiles are preloaded onto a surface and then selectively blitted to
the screen surface. The selection of the tiles is based on a matrix that
indexes the tiles, indicating the combination needed to compose each
scene. As the game scrolls, different tiles are requested, but the
blitted area is always the same size as the screen (or a little larger,
if you rely on clipping).
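
To make the setup concrete, the kind of loop I have in mind looks roughly
like this (just a sketch; the names, the single-strip tile sheet layout and
the constants are only illustrative, and bounds checks are omitted):

#include "SDL.h"

#define TILE  32          /* tile size in pixels */
#define MAP_W 256         /* map size in tiles */
#define MAP_H 256

void draw_background(SDL_Surface *screen, SDL_Surface *tiles,
                     int map[MAP_H][MAP_W], int scroll_x, int scroll_y)
{
    int x, y;
    for (y = 0; y <= screen->h / TILE; ++y)
        for (x = 0; x <= screen->w / TILE; ++x)
        {
            SDL_Rect src, dst;
            int tile = map[scroll_y / TILE + y][scroll_x / TILE + x];
            src.x = tile * TILE;   /* tiles stored in one horizontal strip */
            src.y = 0;
            src.w = TILE;
            src.h = TILE;
            dst.x = x * TILE - scroll_x % TILE;
            dst.y = y * TILE - scroll_y % TILE;
            SDL_BlitSurface(tiles, &src, screen, &dst);
        }
}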

Now consider reproducing the same as above using OpenGL (which I am
completely new to). How would it be done? As I understand it, a geometry
would have to be created (a single face, perpendicular to the player's
view) and the composition of tiles would act as the geometry's texture.
But now I have some questions: would the geometry be just as wide as the
screen, with its texture being updated each frame, or would it be as wide
as necessary to hold the complete image defined by the matrix, with only
the geometry being moved?

I am sorry if my question sounds dumb, but I would like to have a better
idea of how this should be done and why. I have a feeling that the first
option is better than the second, but am not sure. Or maybe the optimal
way has not even been mentioned here.

Thank you in advance,

Ney André de Mello Zunino

hi!

first of all, this question is not dumb, but it is a bit off-topic
for this forum. for all questions about opengl,
you might try the forum at www.opengl.org. it is very
good, and you get very good answers as well (mostly).
anyway, to your question: i don't know if i
understood your question completely. are you talking
about texture tiles or about geometric map information?
for the latter case, you need to store the whole map
in a 2d or 3d array (say m[max_x][max_y], indexed up to
m[max_x-1][max_y-1]) and display your view:

x_pos = /* ... map offset (x) of the view */;
y_pos = /* ... map offset (y) of the view */;
for (y = 0; y <= view_y; y++) {
    for (x = 0; x <= view_x; x++) {
        if (/* check for clipping */)
            drawTile(m[x_pos + x][y_pos + y]);
    }
}

i apologize if you knew this before, but it was the
easiest way to explain :wink:

for the other case, well, you may draw parts of one
texture. so, if you want to draw one texture for
- let's say - a map of 100x100, then you need to put
the part [0…1/100][0…1/100] of the texture onto the
first tile ([0][0]).

void drawTile(float x, float y) {
    const float s = 1.0f / 100.0f;  /* one tile's width/height in texture space */
    glBegin(GL_QUADS);
    /* note: glTexCoord2f() must come *before* the glVertex() it applies to */
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,        y);
    glTexCoord2f(s,    0.0f); glVertex2f(x + 1.0f, y);
    glTexCoord2f(s,    s);    glVertex2f(x + 1.0f, y + 1.0f);
    glTexCoord2f(0.0f, s);    glVertex2f(x,        y + 1.0f);
    glEnd();
}

if you used 1.0 instead of 1/100, you'd map the whole texture
to exactly one tile.
i believe there's some automatic mapping function (glu?),
but i'm quite new as well.
indeed, i'm working on a terrain engine as well, and
i'd gladly exchange experiences with you (or with
anyone else).
btw, none of the code above is tested…

so long,
Tolga.


[…accurate description of how most tiled 2D engines work…]

As I understand, a geometry
would have to be created (a single face, perpendicular to the player’s
view) and the composition of tiles would act as the geometry’s texture.
But now I have some questions: would the geometry be just as wide as
the screen with its texture being updated each frame or would it be as
wide as necessary to hold the complete image defined by the matrix and
then only the geometry needed to be moved?

Neither. You need to use other methods, because of the limitations of
current 3D hardware:

On some platforms, OpenGL drivers are generally incapable of using
busmaster DMA for texture transfers, so you can’t run the background as a
full frame rate procedural texture and get a useful frame rate. (It’s
slower than software rendering directly into VRAM.)

As to rendering the whole map into a texture (if that’s what you
mean…?), that would require a very large texture - while few cards
can handle more than 2048x2048. (Most older cards will do 1024x1024 at
most, I think.) Obviously, texture size is an issue even with the
procedural, screen sized texture approach - even in 320x240! (Some cards
can’t handle more than 256x256…)
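
For what it's worth, the actual limit can be queried at run time with a
standard OpenGL call; just a fragment, nothing card specific:

GLint max_size;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
/* max_size is now e.g. 256, 1024 or 2048, depending on the card and driver. */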

The most straightforward method is in fact very simple: Just use OpenGL
as a 2D rendering API! Render each tile as a quad (or if you’re going to
use vertex blending effects, two triangles, as you can’t know for sure
how a quad is split up).

For texturing, there are basically two methods:

Simple, but slow on some drivers and/or cards:
	* Store each tile image in one texture.
	For each tile:
		* Bind the texture containing the graphics
		  for the tile.
		* Set the texture coords to (0,0) (1,0)
		  (1,1) (0,1) (assuming top-left, top-right,
		  bottom-right, bottom-left vertex order).

Slightly more complex:
	* Pack as many tiles as possible into each texture.
	  (Note that you may have to use multiple "palette
	  textures" on some cards, if you need more than
	  one 256x256 texture to fit all tiles!)
	For each tile:
		* Make sure the texture containing the tile
		  image is selected. (NOP most of the time,
		  if you put "related" tiles on the same
		  texture.)
		* Calculate texture coords to address the
		  specific tile image inside the "palette"
		  texture.
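
Very roughly, the texture coordinate math for the second method could look
something like this (untested sketch; the 256x256 atlas of 32x32 tiles and
the names are just examples, not from any actual code):

/* Draw one tile as a textured quad, picking the tile image out of a
   256x256 "palette" texture holding 8x8 tiles of 32x32 pixels each.
   Assumes the palette texture is already bound. */
void draw_tile(float x, float y, float size, int tile_index)
{
    const float step = 32.0f / 256.0f;   /* one tile in texture space */
    float u = (tile_index % 8) * step;
    float v = (tile_index / 8) * step;

    glBegin(GL_QUADS);
    glTexCoord2f(u,        v);        glVertex2f(x,        y);
    glTexCoord2f(u + step, v);        glVertex2f(x + size, y);
    glTexCoord2f(u + step, v + step); glVertex2f(x + size, y + size);
    glTexCoord2f(u,        v + step); glVertex2f(x,        y + size);
    glEnd();
}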

(BTW, glSDL currently uses the first method, except for fonts, which are
treated as single surfaces in the traditional SFont way. Not that slow,
it seems…)

The most serious issue with this method is that it's a bit complicated
and somewhat expensive to get it to work with sub-pixel accurate smooth
scrolling. Even pretty old cards will do subpixel accuracy for the
texturing, but cards without h/w antialiasing will just round the screen
coordinates of the vertices to the nearest integer pixel. That is, you
get hard and inaccurate edges, even if the textures are filtered.

To get around that, you need to use either RGBA textures, or multiple
quads per tile + vertex alpha blending, effectively implementing
antialiasing on the application level.

A much more powerful, but also more complicated method, is to use a
hybrid of the two methods you suggested and my method:

* Set up a small map, just one tile larger than the screen,
  with large tiles. (Say, 64x64..256x256, depending on screen
  resolution.) I'll refer to this map as the "virtual screen".

* Preallocate a pool of tile textures for the virtual screen.
  There should be enough of them that you can fill the screen
  with unique tiles, and have enough spare tiles to do
  tile-by-tile procedural updates of at least a full row and
  a full column of the screen.

* "Operate" the virtual screen as a tiled hardware scrolling
  video device. (Or rather a "sprites only" display, if
  you're into arcade machine hardware. IIRC, the Nintendo
  GameBoy and some game consoles use similar approaches.)
  That is, smooth scroll by adjusting the tile quad vertex
  coords, and coarse scroll by adjusting the (integer)
  virtual screen map offset.

* Whenever you start getting too close to the edges of the
  virtual screen map, start a "background thread" that
  renders new areas from the game map into a set of unused
  virtual screen tiles.

* Upon hitting an edge of the virtual screen map, rebuild it
  with a new map offset, that will put the screen inside the
  map again. (Obviously, the new virtual screen tiles *must*
  be ready before this happens!)
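
Not a real implementation, but the coarse/smooth split in the third point
above boils down to something like this (sketch only; 64x64 virtual screen
tiles, and draw_vs_tile() is an imaginary helper that renders one virtual
screen tile quad at a given screen position):

/* scroll_x/scroll_y: current offset into the virtual screen, in pixels.
   These stay small; when they reach the edge of the virtual screen map,
   it gets rebuilt with a new offset, as described above. */
int coarse_x = scroll_x / 64;   /* leftmost visible virtual screen column */
int coarse_y = scroll_y / 64;
int fine_x   = scroll_x % 64;   /* sub-tile offset, applied to the quads */
int fine_y   = scroll_y % 64;
int row, col;

for (row = 0; row <= SCREEN_H / 64; ++row)
    for (col = 0; col <= SCREEN_W / 64; ++col)
        draw_vs_tile(vs[coarse_y + row][coarse_x + col],
                     col * 64 - fine_x,
                     row * 64 - fine_y);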

The second-to-last point is by far the most complicated part to get right,
at least in a game that allows the player to instantly change the
scrolling direction at any time. Also note that even if you're doing
quite heavy s/w rendering (no need to stick to a simple tiled map!),
texture transfer speed will quite likely be the limiting factor, so it's
important to upload as few pixels per frame as possible. Make use of the
time from when the "background thread" is started until it's time to
rebuild the map - but never be late! heh

Of course, the same issues apply when it comes to sub-pixel accurate
scrolling; tile edges must be taken care of.

However, in this case, there are some shortcuts! As the virtual screen
tiles are specifically rendered to be combined in only one way, there
won’t be any tile matching issues that can screw up texture filtering
around the tile edges. All you need to do is provide a border of extra
pixels from surrounding tiles, and the texture filtering will work
correctly.

(Of course, the second approach can be useful without OpenGL as well, for
games that need more map design flexibility than frame-by-frame real time
rendering allows.)

//David Olofson — Programmer, Reologica Instruments AB


hello!

david, the last possibility is astounding! i've been
feeling stupid, since i was searching all this time
for a proper solution!
do you have, or at least know of, a good (and well-
commented) implementation of this algorithm?
also, how would one create such terrain textures with sdl?

perhaps we could discuss this outside the
libsdl-list, because it's maybe off-topic. what do the
others think?

tnx in advance for all your efforts,
Tolga Dalman.

on Thursday, 24 January 2002 09:02 you wrote:

[…original question quoted…]

Your first idea is surely the simplest - though the maximum texture
size of your OpenGL implementation will limit the size of the complete image.
To my knowledge OpenGL allows a maximum of 2048x2048 pixels for a texture,
if your hardware supports it. Some cards may only allow 256x256 or even less.

Your second idea is very slow as you would have to update the complete
texture each time you scroll …

In my project (tile-based scrolling) I am using four OpenGL quads, each of
them as big as the screen. When scrolling occurs, these geometries are
moved, and if one leaves the screen on one side, it starts to enter
again from the opposite side (top-bottom, left-right and vice versa).
The advantage of this concept is that you only need to redraw/update
the part of the texture which has just entered the screen.

Furthermore, if your OpenGL implementation only supports texture sizes
smaller than your screen size, you can use an appropriately higher number of quads.
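
Roughly, the wrap-around looks like this (only a sketch, not my actual
code; SCREEN_W/SCREEN_H and the quad structure are made up for the
example):

/* After a scroll step of (dx, dy), move each of the four quads and wrap
   it around when it has completely left the screen. */
quad[i].x -= dx;
quad[i].y -= dy;

if (quad[i].x <= -SCREEN_W) quad[i].x += 2 * SCREEN_W;   /* left -> right */
if (quad[i].x >=  SCREEN_W) quad[i].x -= 2 * SCREEN_W;   /* right -> left */
if (quad[i].y <= -SCREEN_H) quad[i].y += 2 * SCREEN_H;   /* top -> bottom */
if (quad[i].y >=  SCREEN_H) quad[i].y -= 2 * SCREEN_H;   /* bottom -> top */

/* Only a quad that has just wrapped needs its texture updated with the
   newly visible part of the map. */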

David Olofson wrote:

Neither. You need to use other methods, because of the limitations of
current 3D hardware:

On some platforms, OpenGL drivers are generally incapable of using
busmaster DMA for texture transfers, so you can’t run the background as
a full frame rate procedural texture and get a useful frame rate. (It’s
slower than software rendering directly into VRAM.)

As to rendering the whole map into a texture (if that’s what you
mean…?), that would require a very large texture - while few cards
can handle more than 2048x2048. (Most older cards will do 1024x1024 at
most, I think.) Obviously, texture size is an issue even with the
procedural, screen sized texture approach - even in 320x240! (Some
cards can’t handle more than 256x256…)

First of all, I would like to thank you and the others for your
responses. I apologize for not having replied earlier, but the fact is
that I am quite new to game programming and, especially, to OpenGL. That
means I have had trouble dealing with all the terminology and some of
the techniques described. Nevertheless, I will try to comment on some
parts, to see if I am on the right track. Thanks for your patience.

The most straightforward method is in fact very simple: Just use
OpenGL as a 2D rendering API! Render each tile as a quad (or if
you’re going to use vertex blending effects, two triangles, as you
can’t know for sure how a quad is split up).

Ok, this seems easy to understand. Instead of one big quad covering the
whole screen, I have several small quads which will be arranged side by
side. Fine. However, I lost it when you mentioned using vertex blending
effects. Why would quads not do in such a scenario?

For texturing, there are basically two methods:

Simple, but slow on some drivers and/or cards:
* Store each tile image in one texture.
For each tile:
* Bind the texture containing the graphics
for the tile.
* Set the texture coords to (0,0) (1,0)
(1,1) (0,1) (assuming top-left, top-right,
bottom-right, bottom-left vertex order).

Understood.

Slightly more complex:
* Pack as many tiles as possible into each texture.
(Note that you may have to use multiple “palette
textures” on some cards, if you need more than
one 256x256 texture to fit all tiles!)
For each tile:
* Make sure the texture containing the tile
image is selected. (NOP most of the time,
if you put “related” tiles on the same
texture.)
* Calculate texture coords to address the
specific tile image inside the "palette"
texture.

Ditto.

[…]

The most serious issue with this method is that it’s a bit
complicated and somewhat expensive to get it to work with sub-pixel
accurate smooth scrolling. Even pretty old cards will do subpixel
accuracy for the texturing, but cards without h/w antialiasing will
just round the screen coordinates of the vertices to the nearest
integer pixel. That is, you get hard and inaccurate edges, even if
the textures are filtered.

Humm, I am not sure I have got this one. First, what do you mean by
subpixel accuracy? I thought that pixels were as fine as you could go
in terms of granularity… Or do you mean that, because of the screen
coordinate rounding, the texture mapping loses precision, resulting in
the inaccurate edges that you mention? I am just guessing here… Would
you be so kind as to explain it a bit further?

To get around that, you need to use either RGBA textures, or multiple
quads per tile + vertex alpha blending, effectively implementing
antialiasing on the application level.

Would that not be too slow? I mean, if I had, say, 4 or 5 quads per
tile, would that not mean that I would have 4 or 5 times more
processing per tile? Would that be acceptable?

A much more powerful, but also more complicated method, is to use a
hybrid of the two methods you suggested, and my method;

  • Set up a small map, just one tile larger than the screen,
    with large tiles. (Say, 64x64…256x256, depending on screen
    resolution.) I’ll refer to this map as the “virtual screen”.

Do you mean that, for instance, in a horizontal scrolling setup, with
640x480 resolution and 64x64 tiles, I would need a 704x480 map (i.e. one
extra column of tiles)?

[further description of the advanced technique]

I will skip the advanced technique for now, until I feel more
comfortable with the basic concepts.

Of course, the same issues apply when it comes to sub-pixel accurate
scrolling; tile edges must be taken care of.

However, in this case, there are some shortcuts! As the virtual screen
tiles are specifically rendered to be combined in only one way, there
won’t be any tile matching issues that can screw up texture filtering
around the tile edges. All you need to do is provide a border of extra
pixels from surrounding tiles, and the texture filtering will work
correctly.

Here we are with the subpixel accuracy thing again. What happens at the
tile edges after all? :slight_smile: Why will an extra border solve the problem? I
know you must be thinking: “Quit being lazy and go try it to see for
yourself.” You are right, I should do that, and I will. The problem is
that I am still a little far from getting to that point and believe some
clarification would be enough at this point.

Thank you very much,

Ney André de Mello Zunino

[…]

The most straightforward method is in fact very simple: Just use
OpenGL as a 2D rendering API! Render each tile as a quad (or if
you’re going to use vertex blending effects, two triangles, as you
can’t know for sure how a quad is split up).

Ok, this seems easy to understand. Instead of one big quad covering all
the screen, I have several small quads which will be arranged side by
side. Fine. However, I lost it when you mentioned using vertex blending
effects. Why would quads not do in such scenario?

Well, first of all, the fundamental reason for the problem: Most cards
accelerate only triangles, so anything with 4+ vertices will have to be
split up into triangles.

Vertex blending effects usually involve setting up different color and
alpha “modulation parameters” for each vertex of a polygon. OpenGL will
interpolate the parameters across the polygon. (If all vertices have the
same color and alpha, there’s no problem, as the interpolation will
effectively be a NOP.)

The problem is that the result may differ slightly depending on how the
card actually renders the polygon. In the case of a quad, there are two
ways to split it into triangles - and you can't know for sure which way the
current driver will do it.

So, by explicitly passing triangles, you can’t really avoid the problem,
but at least you can control how the splitting is done.
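
For example, something like this makes the split explicit (sketch only; the
positions x, y, size s, texture coords u0/v0/u1/v1 and the per-vertex alpha
values a0..a3 are placeholders for whatever effect you're after):

glBegin(GL_TRIANGLES);
/* first triangle: top-left, top-right, bottom-right */
glColor4f(1, 1, 1, a0); glTexCoord2f(u0, v0); glVertex2f(x,     y);
glColor4f(1, 1, 1, a1); glTexCoord2f(u1, v0); glVertex2f(x + s, y);
glColor4f(1, 1, 1, a2); glTexCoord2f(u1, v1); glVertex2f(x + s, y + s);
/* second triangle: top-left, bottom-right, bottom-left */
glColor4f(1, 1, 1, a0); glTexCoord2f(u0, v0); glVertex2f(x,     y);
glColor4f(1, 1, 1, a2); glTexCoord2f(u1, v1); glVertex2f(x + s, y + s);
glColor4f(1, 1, 1, a3); glTexCoord2f(u0, v1); glVertex2f(x,     y + s);
glEnd();

This way the diagonal is always from the top-left to the bottom-right
corner, regardless of what the driver would have done with a quad.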

[…]

The most serious issue with this method is that it’s a bit
complicated and somewhat expensive to get it to work with sub-pixel
accurate smooth scrolling. Even pretty old cards will do subpixel
accuracy for the texturing, but cards without h/w antialiasing will
just round the screen coordinates of the vertices to the nearest
integer pixel. That is, you get hard and inaccurate edges, even if
the textures are filtered.

Humm, I am not sure I have got this one. First, what do you mean by
subpixel accuracy?

Interpolation of textures to simulate positioning of graphics with higher
accuracy than one pixel.

I thought that pixels were as fine as you could go
in terms of granularity…

They are - if you can dictate the video refresh rate, and design the game
to have suitable scrolling speeds.

However, in real life, that’s more or less futile. You don’t always get
the refresh rate you ask for (if you can ask for one at all!), and
perhaps constant speed scrolling isn’t all that exciting in the long
run…

So, to achieve extremely smooth scrolling and animation, you’ll need to
calculate the scroll and sprite positions for each rendered frame based
on the actual time that frame will be displayed. The closer the rendered
results appear to the exact positions, the smoother the animation.
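
In SDL terms that can be as simple as this (sketch; scroll_speed is of
course game specific):

/* Scroll position derived from time rather than from a frame counter. */
float t = SDL_GetTicks() * 0.001f;   /* seconds since SDL was initialized */
float scroll_x = scroll_speed * t;   /* pixels - keep the fractional part! */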

Or do you mean that, because of the screen
coordinates rounding, the texture mapping loses precision, resulting in
the inaccurate edges that you mention? I am just guessing here… Would
you be so kind to explain it a bit further?

Well, the problem with the edges is that most 3D cards will render
textures with sub pixel accuracy - but they won’t interpolate or
antialias polygon edges!

That is, if you move a tiled background without any special precautions
taken (and no FSAA) very slowly, the graphics will appear to "float"
smoothly (rather than “jump” one pixel at a time, as it normally does in
2D games) - BUT, the edges will still jump! That kind of ruins the ultra
smooth scrolling effect, obviously…

To get around that, you need to use either RGBA textures, or
multiple quads per tile + vertex alpha blending, effectively
implementing antialiasing on the application level.

Would that not be too slow? I mean, if I had, say, 4 or 5 quads per
tile, would that not mean that I would have 4 or 5 times more
processing per tile? Would that be acceptable?

That depends on your hardware. A hot machine with good drivers can be
capable of pushing thousands of polygons per frame at full frame rate, so
there might not actually be a problem.

Of course, it's nice if you can get everything to work even on low-end 3D
accelerators, but at some point you simply have to turn some of the eye
candy off and lower the rendering quality. (Which may include disabling
"Ultra Smooth Scrolling".)

A much more powerful, but also more complicated method, is to use a
hybrid of the two methods you suggested, and my method;

  • Set up a small map, just one tile larger than the screen,
    with large tiles. (Say, 64x64…256x256, depending on screen
    resolution.) I’ll refer to this map as the “virtual screen”.

Do you mean that, for instance, in a horizontal scrolling setup, with
640x480 resolution and 64x64 tiles, I would need a 704x480 map (i.e.
one extra column of tiles)?

Yep. (If you have some experience with hardware scrolling on the Amiga,
C64 or VGA Mode-X, the extra row and column correspond to the “scroll
border” you’ll need unless you’re going to refresh the edges after every
scroll position change.)

[further description of the advanced technique]

I will skip the advanced technique for now, until I feel more
comfortable with the basic concepts.

Yeah, get started rather than scared off! :wink:

Of course, the same issues apply when it comes to sub-pixel accurate
scrolling; tile edges must be taken care of.

However, in this case, there are some shortcuts! As the virtual
screen tiles are specifically rendered to be combined in only one
way, there won’t be any tile matching issues that can screw up
texture filtering around the tile edges. All you need to do is
provide a border of extra pixels from surrounding tiles, and the
texture filtering will work correctly.

Here we are with the subpixel accuracy thing again. What happens at the
tile edges after all? :slight_smile:

The 3D card will have to grab some extra pixels outside the actual
texture area, for the interpolation. Normally, those pixels will be
black, transparent or whatever - while they should contain graphics
from adjacent tiles to produce proper results!

If you don’t do anything about this, but just pass float vertex or
texture coordinates, you’ll get thin, “flickering” lines in between
tiles, rather than the desired illusion of a solid, contiguous image.

Why will an extra border solve the problem?

Because you can fill that border in with pixels from the adjacent tiles,
so that interpolation will “chain” as if you had rendered the whole
screen as one quad.
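
For example, if the virtual screen tiles are 256x256 textures of which only
the central 254x254 pixels are "unique" content - the outermost texel on
each side holding pixels copied from the neighbouring tiles - the quads
would be 254x254 pixels on screen, with the texture coordinates inset by
one texel (only the numbers matter here, not the exact sizes):

const float t0 = 1.0f / 256.0f;     /* skip the border texel */
const float t1 = 255.0f / 256.0f;

glTexCoord2f(t0, t0); glVertex2f(x,       y);
glTexCoord2f(t1, t0); glVertex2f(x + 254, y);
glTexCoord2f(t1, t1); glVertex2f(x + 254, y + 254);
glTexCoord2f(t0, t1); glVertex2f(x,       y + 254);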

BTW, you may have noticed that this is not a problem along the line where
a quad is split. That's because that line is in the middle of valid
texture data, rather than bordering the void space outside the
texture. (Or the "wrap limit", if you disable texture clamping.)

I
know you must be thinking: “Quit being lazy and go try it to see for
yourself.” You are right, I should do that, and I will. The problem is
that I am still a little far from getting to that point and believe some
clarifying would be satisfying enough at this point.

Don’t worry! I wouldn’t blame you for going “Huh!? What’s that
flickering?” even if you did try it yourself. 3D accelerators can seem
to do the strangest things, unless you have a solid understanding of
their inner workings. :slight_smile:

//David Olofson — Programmer, Reologica Instruments AB


This is generally true but you can get OpenGL to anti-alias polygons
with some explicit settings. The only thing is, you cannot use the Z-buffer
in this mode, rather you have to pass polygons in a certain order
(front to back, iirc). This is covered in the OpenGL Programming Guide.
I don’t have the book here with me but I could copy some examples later
if you don’t have the book.

This is usually a worthless mode for 3D graphics, I guess, but it could be
great for 2D!

I’m not sure if it works on all hardware, of course. I’ve only tried it
on Radeon cards.

Hum, maybe that's exactly what you were talking about though. Sorry if this
is redundant.

On Wed, Jan 30, 2002 at 10:58:34PM +0100, David Olofson wrote:

On Tuesday 29 January 2002 05:15, Ney André de Mello Zunino wrote:
[…]

Or do you mean that, because of the screen
coordinates rounding, the texture mapping loses precision, resulting in
the inaccurate edges that you mention? I am just guessing here… Would
you be so kind to explain it a bit further?

Well, the problem with the edges is that most 3D cards will render
textures with sub pixel accuracy - but they won’t interpolate or
antialias polygon edges!

That is, if you move a tiled background without any special precautions
taken (and no FSAA) very slowly, the graphics will appear to "float"
smoothly (rather than “jump” one pixel at a time, as it normally does in
2D games) - BUT, the edges will still jump! That kind of ruins the ultra
smooth scrolling effect, obviously…


Greg V. (hmaon)

[…OpenGL polygon AA…]

This is usually a worthless mode for 3D graphics, I guess, but it could
be great for 2D!

I’m not sure if it works on all hardware, of course. I’ve only tried it
on Radeon cards.

Hum, maybe that’s exactly what you were talking about though. Sorry if
this is redundant.

Well, just like some cards may have FSAA that's “correct” enough to work
for this, some cards may be able to do it with polygon antialiasing - but
both are features that are still mostly found on high-end cards. IMHO,
it's way too rare a feature to be relied upon for such a critical task.
(It has to work, or you might as well disable subpixel accurate
rendering altogether.) Alternative methods must be supported as well.

(And while we're discussing whether or not high-end OpenGL features should
be used, some still argue that you shouldn't write a 2D game that can't
run without OpenGL…)

//David Olofson — Programmer, Reologica Instruments AB


David Olofson wrote:

(And while we're discussing whether or not high-end OpenGL features should
be used, some still argue that you shouldn’t write a 2D game that can’t
run without OpenGL…)

That’s interesting. Last night I was thinking that there was no point
using anything BUT OpenGL for the graphics in a game…

I think the question is really, what is the least common computer you
want your game to run on? There are still a lot of old computers out
there that can run strictly 2D blitter-based games at a reasonable frame
rate but cannot run OpenGL-based games at an equivalent rate. If you
look at the cleverness needed to get Doom running on a 386/486 you can
see what I mean. A general purpose 3D graphics API just doesn't work that
well on older and low-end computers.

Bob P.

Bob Pendleton is seeking contract and consulting work.
Find out more at http://www.jump.net/~bobp

Well, just like some cards may have FSAA that’s “correct” enough to work
for this, some cards may be able to do it with polygon antialiasing - but
both are features that are still mostly found on high end cards.

Oh, really? Doh. I know it’s been in the OpenGL specs for a while, as I
only have the 1.1 edition of the red book (aka 2nd edition, I think).
FSAA seems to mess up 2D images drawn through a 3D api quite often.
Artifacts like weird lines at tile edges appear. It also blurs text
needlessly, making it hard to read.

IMHO,
it’s a way too rare feature to be relied upon for such a critical task.
(It has to work, or you might as well disable subpixel accurate
rendering altogether.) Alternative methods must be supported as well.

Makes sense. It’s probably worth it to make an api with several back-end
implementations, for newer 3D cards and for platforms without 3D.
The SDL might be such an api. However, there’s no way right now to tell
SDL to do anti-aliasing tricks on blits, so you'd need to either have
creative control over the next version of SDL or, much more likely,
write your own new mini api for blits.

On a fast CPU, you could implement nice sub-pixel accurate blitting
with just the 2D api of SDL using alpha blitting. You can blit to an
intermediate buffer with sub-pixel accuracy using the alpha channel for
anti-aliasing at image edges. Then blit the intermediate buffer to the
back-buffer with alpha blending. The intermediate buffer would be a
minimum of 1 pixel larger than the sprite you want to blit, I guess.
This has the advantage of being simple to implement, I think. It
wouldn’t be as fast as one blit routine that would write directly to
your back-buffer.

(And while we're discussing whether or not high-end OpenGL features should
be used, some still argue that you shouldn’t write a 2D game that can’t
run without OpenGL…)

I wouldn’t dream of it. OK, I would dream of it but I wouldn’t actually
do it. It's a shame to let all that power in new cards go to waste though.



Greg V. (hmaon)

David Olofson wrote:

(And while we're discussing whether or not high-end OpenGL features
should be used, some still argue that you shouldn’t write a 2D game
that can’t run without OpenGL…)

That’s interesting. Last night I was thinking that there was no point
using anything BUT OpenGL for the graphics in a game…

Well, I guess it depends on what level of graphics quality you want.
Sure, great games are mostly about design, but sharp images with smooth
animation do make fast action games much more playable - and as many
gamers consider resolutions like 320x240 a thing of the past, software
rendering is just not an option on many targets.

And of course, if you really want to make use of the OpenGL power,
supporting software rendering will be lots of work, which isn't very
likely to get done, especially not for free games…

I think the question is really, what is the least common computer you
want your game to run on?

Exactly.

(Although requiring high-end-only OpenGL features just isn't an option,
unless you're coding for arcade machines based on such hardware. :slight_smile: )

//David Olofson — Programmer, Reologica Instruments AB


Well, just like some cards may have FSAA that’s “correct” enough to
work for this, some cards may be able to do it with polygon
antialiasing - but both are features that are still mostly found on
high end cards.

Oh, really? Doh. I know it’s been in the OpenGL specs for a while, as I
only have the 1.1 edition of the red book (aka 2nd edition, I think).

Yeah… Simple lines have been around for quite a while as well, but very
few consumer cards accelerate them at all! (You can get GeForce 2 and 3
cards to do it if you use the right drivers and fool them into believing
you have a card with a “professional version” of the chip. The consumer
and high end versions of the chips are basically the same, except for
clock ratings.)

FSAA seems to mess up 2D images drawn through a 3D api quite often.
Artifacts like weird lines at tile edges appear. It also blurs text
needlessly, making it hard to read.

Side effects of cheap implementations - but nevertheless, that’s
effectively the state of the art, so we have to deal with it for now. That
is, don’t trust FSAA for 2D.

IMHO,
it’s a way too rare feature to be relied upon for such a critical
task. (It has to work, or you might as well disable subpixel
accurate rendering altogether.) Alternative methods must be supported
as well.

Makes sense. It’s probably worth it to make an api with several
back-end implementations, for newer 3D cards and for platforms without
3D. The SDL might be such an api. However, there’s no way right now to
tell SDL to do anti-aliasing tricks on blits so you’d need to either
have creative control over the next version of SDL or, much more
likely, write your own new mini api for blits.

I’ve considered hacking a “bonus feature” into glSDL, to allow you to
specify a decimal point position for surface coordinates:

void (gl)SDL_SetPrecision(int fraction_bits);

which would allow you to pass fixed point coordinates to
(gl)SDL_BlitSurface() and co. (Internally, it’s just a matter of using
32-N:N fixed point coordinates, and shifting arguments (fraction_bits-N)
bits. The higher levels of the Kobo Deluxe / Spitfire graphics engine
do this all the time, as they use 24:8 fixed point coordinates for
virtually everything internally.)
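
Just to illustrate the idea (this is a hypothetical sketch of the feature
being discussed - glSDL does not actually have it - and it assumes
fraction_bits <= 8):

#include "SDL.h"

#define INTERNAL_FRAC_BITS 8          /* internal 24:8 format */

static int frac_bits = 0;             /* 0 = plain integer pixels, as in SDL */

void SDL_SetPrecision(int fraction_bits)
{
    frac_bits = fraction_bits;
}

/* Convert a user coordinate to the internal 24:8 format. */
static Sint32 to_internal(Sint32 coord)
{
    return coord << (INTERNAL_FRAC_BITS - frac_bits);
}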

This would work great for sprites, but it cannot transparently deal with
tiled backgrounds and the like, so it’s only half a solution. (Same
problems as with FSAA and AA polygons for 3D, basically.)

On a fast CPU, you could implement nice sub-pixel accurate blitting
with just the 2D api of SDL using alpha blitting. You can blit to an
intermediate buffer with sub-pixel accuracy using the alpha channel for
anti-aliasing at image edges. Then blit the intermediate buffer to the
back-buffer with alpha blending. The intermediate buffer would be a
minimum of 1 pixel larger than the sprite you want to blit, I guess.
This has the advantage of being simple to implement, I think.

I’m not 100% sure that this would actually work with the SDL blitters,
but if it does, you need a very fast CPU!

It
wouldn’t be as fast as one blit routine that would write directly to
your back-buffer.

Right.

You’re probably better off pre-rendering an array of "sub pixel shifted"
versions of each image. The disadvantage is that you won’t get as high
accuracy without tons of images, but you also get an advantage (apart
from speed): You can use higher order interpolation and gamma
compensation to reduce the artifacts of the shifting.
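
For example, with four pre-rendered shift steps per axis, picking the right
version is just a matter of looking at the fractional part of the position
(sketch; 'frames' and 'blit' are made-up names, and x/y are 24:8 fixed
point as above):

/* frames[sy][sx] holds the image pre-rendered with a shift of
   (sx/4, sy/4) pixels. */
int sx = ((x & 255) * 4) >> 8;   /* 0..3 */
int sy = ((y & 255) * 4) >> 8;
blit(frames[sy][sx], x >> 8, y >> 8);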

(And while we're discussing whether or not high-end OpenGL features
should be used, some still argue that you shouldn’t write a 2D game
that can’t run without OpenGL…)

I wouldn’t dream of it. OK, I would dream of it but I wouldn’t actually
do it. It’s a shame to let all that power in new cards go to waste
though.

Well, that isn’t to say you shouldn’t use the features at all. Just
check which features are present, and preferably, provide user options to
turn them off, in case some cards/drivers fake them or implement them
poorly.
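
The run time check itself is trivial; a sketch (note that a real version
should match whole extension names rather than substrings):

#include <string.h>
#include <GL/gl.h>

/* Returns 1 if the named extension is reported by the current driver. */
int have_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}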

//David Olofson — Programmer, Reologica Instruments AB
