2D with OpenGL & SDL

Hi, I'm planning to change my application from pure blitting with 2D SDL
surfaces to OpenGL, using textured quads as sprites. I still have some
doubts about this, so I'd like some insight. Here are my questions:

  1. Performance-wise, will this make my application faster or slower? FYI,
    the target machine is a VIA EPIA Mini-ITX embedded board running
    openSUSE 10.1 Linux.

  2. Do I need to implement a shading language (I think maybe Cg)?

  3. What OpenGL settings make the 2D environment look just like when I
    blit directly to an SDL surface (e.g. the viewport, 1 point equals
    1 pixel, etc.)?

  4. AFAIK, to create a texture, the width and height of the image should
    be powers of 2. So how do I make a sprite whose width and/or height is
    not a power of 2?

Thanks in advance.

Fare thee well,
Bawenang R. P. P.
----------------
ERROR: Brain not found. Please insert a new brain!

"Do nothing which is of no use." - Miyamoto Musashi.

“I live for my dream. And my dream is to live my life to the fullest.”

benang at cs.its.ac.id wrote:

  1. Performance-wise, will this make my application faster or slower? FYI,
    the target machine is a VIA EPIA Mini-ITX embedded board running
    openSUSE 10.1 Linux.

On this point I’m really not sure, but I would guess that, so long as you have a good graphics card, and you keep the sprites in video RAM as textures, it’ll probably be faster. Plus, certain types of operations, such as scaling or rotating sprites, will be much faster.

  2. Do I need to implement a shading language (I think maybe Cg)?

Not unless you want to do some pretty fancy special effects. If your game is based entirely on drawing 2D images on top of each other with alpha channels, then just use OpenGL’s standard blending modes and you’ll be fine.
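For reference, a minimal sketch of that standard blending setup, assuming the sprites are drawn as textured quads with an alpha channel (these are plain OpenGL 1.x calls):

/* Enable standard "source over destination" alpha blending -
 * roughly the OpenGL equivalent of SDL's per-pixel alpha blits. */
glEnable(GL_TEXTURE_2D); /* sprites are textured quads */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);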

  3. What OpenGL settings make the 2D environment look just like when I
    blit directly to an SDL surface (e.g. the viewport, 1 point equals
    1 pixel, etc.)?

Leave the viewport alone unless you actually want to restrict drawing to a limited portion of the screen. The default viewport is the same as the screen/window that the game is drawn into.

For setting it up to have 1 point per pixel, gluOrtho2D() is your friend. Specifically:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, width, 0, height);

Where “width” and “height” are the screen resolution.

Note that OpenGL's coordinate system has its origin in the lower-left corner, not the upper-left corner like you might be used to. I BELIEVE the call to gluOrtho2D, as I just demonstrated, will flip it vertically so it will map to what you expect. However, no promises on that. If that doesn't work, you can always use the modelview matrix to scale everything by -1.0 on the y axis.
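For what it's worth, gluOrtho2D's arguments are (left, right, bottom, top), so passing the height as "bottom" and 0 as "top" is a common way to get SDL-style top-left coordinates without touching the modelview matrix. A minimal sketch, assuming width and height are the screen resolution as above:

/* Map OpenGL coordinates to SDL-style pixels: origin at the
 * top-left, y increasing downward, 1 unit = 1 pixel. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, width, height, 0); /* note: bottom = height, top = 0 */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();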

  4. AFAIK, to create a texture, the width and height of the image should
    be powers of 2. So how do I make a sprite whose width and/or height is
    not a power of 2?

This was true in old versions of OpenGL, but I know it's been fixed by OpenGL 2.0. I'm not sure, offhand, exactly which version made the shift, but if your version is up to date, you should be able to use any texture size - provided, of course, it's not larger than your implementation's maximum texture size. These days, that's usually either 512 or 1024, so you probably won't have a problem there.
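Rather than guessing, you can ask the driver at runtime. A minimal sketch (glGetIntegerv and glGetString are standard OpenGL calls; the function name and the strstr-based extension check are the classic OpenGL 1.x idiom, not anything from this thread):

#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Query texture limits at runtime; requires a current GL context. */
static void query_texture_caps(void)
{
    GLint max_size = 0;
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);

    /* Largest width/height glTexImage2D will accept. */
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);

    /* Non-power-of-two textures: core in OpenGL 2.0, otherwise
     * advertised through this ARB extension string. */
    int npot = ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;

    printf("max texture size: %d, NPOT textures: %s\n",
           max_size, npot ? "yes" : "no");
}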

Mike Powell wrote:

benang at cs.its.ac.id wrote:

  1. Performance-wise, will this make my application faster or slower? FYI,
    the target machine is a VIA EPIA Mini-ITX embedded board running
    openSUSE 10.1 Linux.

On this point I’m really not sure, but I would guess that, so long as you have a good graphics card, and you keep the sprites in video RAM as textures, it’ll probably be faster. Plus, certain types of operations, such as scaling or rotating sprites, will be much faster.

I think your graphics chip is a VIA Unichrome Pro. For typical applications, the 3D functionality of this chip will be faster. As Mike said, rotation and scaling come for free. But I've encountered some limits with this chip: lines and points rendered with OpenGL (glBegin(GL_LINES)) were very slow. If you need to draw lines, you should render a thin quad instead. Also, I wasn't able to make use of the multitexture feature, although the chip exposes the appropriate extension; the same code worked on every other graphics card. If you have to do complex blending, you might have to do it in several passes (like you would do now). But this may have been fixed in newer driver versions.
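A minimal sketch of that thin-quad workaround, in the immediate-mode style used elsewhere in this thread (the function name and thickness parameter are my own; assumes the endpoints are distinct):

#include <math.h>
#include <GL/gl.h>

/* Draw a line from (x0,y0) to (x1,y1) as a thin quad, for chips
 * where glBegin(GL_LINES) is slow. Assumes len > 0. */
static void draw_line_as_quad(float x0, float y0, float x1, float y1,
                              float thickness)
{
    float dx = x1 - x0, dy = y1 - y0;
    float len = sqrtf(dx * dx + dy * dy);
    /* Perpendicular unit vector scaled to half the thickness. */
    float nx = -dy / len * 0.5f * thickness;
    float ny =  dx / len * 0.5f * thickness;

    glBegin(GL_QUADS);
    glVertex2f(x0 + nx, y0 + ny);
    glVertex2f(x1 + nx, y1 + ny);
    glVertex2f(x1 - nx, y1 - ny);
    glVertex2f(x0 - nx, y0 - ny);
    glEnd();
}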

  2. Do I need to implement a shading language (I think maybe Cg)?

Not unless you want to do some pretty fancy special effects. If your game is based entirely on drawing 2D images on top of each other with alpha channels, then just use OpenGL’s standard blending modes and you’ll be fine.

In the case of the VIA Unichrome you won't be able to do that, since the chip doesn't support any shading language. Maybe the Unichrome Pro does.


  4. AFAIK, to create a texture, the width and height of the image should
    be powers of 2. So how do I make a sprite whose width and/or height is
    not a power of 2?

This was true in old versions of OpenGL, but I know it's been fixed by OpenGL 2.0. I'm not sure, offhand, exactly which version made the shift, but if your version is up to date, you should be able to use any texture size - provided, of course, it's not larger than your implementation's maximum texture size. These days, that's usually either 512 or 1024, so you probably won't have a problem there.

Again, the onboard graphics chip might be far from OpenGL 2.0. I suggest looking at the extensions your graphics chip provides. For non-power-of-two textures OpenGL 2.0 would be great, but the GL_ARB_texture_rectangle extension supports them too (less convenient, and with some minor limitations). I think the Unichrome chip doesn't support either of these, so you would have to preprocess the images yourself or let gluScaleImage do it for you on the fly.
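A minimal sketch of the gluScaleImage route (the helper names next_pow2 and make_pot_image are my own; error handling omitted):

#include <stdlib.h>
#include <GL/glu.h>

/* Round n up to the next power of two. */
static int next_pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Rescale a w x h RGBA image to power-of-two dimensions with
 * gluScaleImage; the caller frees the returned buffer after
 * handing it to glTexImage2D. */
static unsigned char *make_pot_image(const unsigned char *src, int w, int h,
                                     int *pot_w, int *pot_h)
{
    unsigned char *dst;

    *pot_w = next_pow2(w);
    *pot_h = next_pow2(h);
    dst = malloc((size_t) *pot_w * *pot_h * 4);
    gluScaleImage(GL_RGBA, w, h, GL_UNSIGNED_BYTE, src,
                  *pot_w, *pot_h, GL_UNSIGNED_BYTE, dst);
    return dst;
}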

benang at cs.its.ac.id wrote:

  4. AFAIK, to create a texture, the width and height of the image should
    be powers of 2. So how do I make a sprite whose width and/or height is
    not a power of 2?

On Mon, 2007-07-16 at 22:08 -0700, Mike Powell wrote:

This was true in old versions of OpenGL, but I know it's been fixed by OpenGL 2.0. I'm not sure, offhand, exactly which version made the shift, but if your version is up to date, you should be able to use any texture size - provided, of course, it's not larger than your implementation's maximum texture size. These days, that's usually either 512 or 1024, so you probably won't have a problem there.

I think OpenGL 1.5 also has this restriction removed, and it's quite possible the driver supports 1.5 but not 2.0. (2.0 makes GLSL required.)

There are various things you can do to alleviate the texture limitations, though. One trick is to pack multiple tiles/images/textures into a single OpenGL texture, which, if done carefully, will let you use a max-sized texture with little wasted space. You then just have to remember the texture ID, width, height, x/s offset, and y/t offset of your image in the texture, and pass those parameters in when rendering your quads. It can also help to design your graphics around power-of-two dimensions if at all possible. Especially if you're making a tile-based game and not some other kind of app, splitting up your tiles into smaller 16x16 or 32x32 bits would be a good idea. (Few people realize this, but the tiles in old 2D Nintendo games were actually very, very small - most of the tile "rips" you find online don't actually rip the tiles, but larger squares made of multiple tiles, so a lot of people think they're bigger than they really are.)
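To make that bookkeeping concrete, here is a minimal sketch of the data listed above and a draw call that uses it (the struct and function names are my own, not from this thread; assumes the 1-unit-per-pixel ortho projection set up earlier):

/* One packed sprite: which texture it lives in, its pixel
 * rectangle inside that texture, and the texture's dimensions. */
typedef struct {
    GLuint tex;       /* OpenGL texture ID of the atlas */
    int x, y, w, h;   /* pixel rectangle inside the atlas */
    int tex_w, tex_h; /* full atlas dimensions */
} Sprite;

/* Draw the sprite at screen position (sx, sy). */
static void draw_sprite(const Sprite *s, float sx, float sy)
{
    /* Convert the pixel rectangle to 0..1 texture coordinates. */
    float u0 = (float) s->x / s->tex_w;
    float v0 = (float) s->y / s->tex_h;
    float u1 = (float) (s->x + s->w) / s->tex_w;
    float v1 = (float) (s->y + s->h) / s->tex_h;

    glBindTexture(GL_TEXTURE_2D, s->tex);
    glBegin(GL_QUADS);
    glTexCoord2f(u0, v0); glVertex2f(sx,        sy);
    glTexCoord2f(u1, v0); glVertex2f(sx + s->w, sy);
    glTexCoord2f(u1, v1); glVertex2f(sx + s->w, sy + s->h);
    glTexCoord2f(u0, v1); glVertex2f(sx,        sy + s->h);
    glEnd();
}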



Sean Middleditch

benang at cs.its.ac.id wrote:

  4. AFAIK, to create a texture, the width and height of the image should
    be powers of 2. So how do I make a sprite whose width and/or height is
    not a power of 2?

On Mon, 2007-07-16 at 22:08 -0700, Mike Powell wrote:

This was true in old versions of OpenGL, but I know it's been fixed by OpenGL 2.0. I'm not sure, offhand, exactly which version made the shift, but if your version is up to date, you should be able to use any texture size - provided, of course, it's not larger than your implementation's maximum texture size. These days, that's usually either 512 or 1024, so you probably won't have a problem there.

I think 2048 is common.

In our game we use only power-of-two textures to be more compatible, and we also noticed a speed boost on some platforms/cards.

We simply fill the texture with transparent pixels and adjust the coordinates to display the image correctly.
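A minimal sketch of that padding trick, assuming an RGBA pixel buffer and the next_pow2 helper from the earlier sketch (the function name is my own):

#include <stdlib.h>
#include <GL/gl.h>

/* Upload a w x h RGBA image into a power-of-two texture padded
 * with transparent pixels; *u_max and *v_max receive the texture
 * coordinates of the image's right and bottom edges. */
static GLuint upload_padded(const unsigned char *pixels, int w, int h,
                            float *u_max, float *v_max)
{
    int pw = next_pow2(w), ph = next_pow2(h);
    unsigned char *blank = calloc((size_t) pw * ph, 4); /* transparent */
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* Allocate the padded texture filled with transparent black... */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pw, ph, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, blank);
    /* ...then drop the real image into its top-left corner. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(blank);

    *u_max = (float) w / pw;
    *v_max = (float) h / ph;
    return tex;
}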


Also, for animations (sprites), having one texture is faster, as you can keep it bound and only change the coordinates.

For tiles, I once worked on an engine where we used one texture for a couple of reasons:

  • A bit faster.
  • Able to create bigger tiles, or pre-joined tiles (say a mountain with
    a forest around it is 5 tiles; you can draw it in one pass).
  • The code is easier to manage; for example, take a night filter: you
    apply it once to the texture and you are done.

Regards

Kuon

"Don’t press that button."
http://goyman.com/
Blog: http://kuon.goyman.com/

Thanks for the reply. The graphics card is a VIA Unichrome, and as Matthias Weigand said, it's probably not as good as I want it to be. Also, the VGA has no vendor driver for Linux, so I use the standard one that comes with the machine. I think I should reconsider this some more. I'll still redesign my rendering engine (or recreate it from scratch) with OpenGL, but not for the current game. Thanks again.



Yeah, it's a VIA Unichrome. From what you said, this card is probably a low-end one, right? I'm having second thoughts about this now. Maybe I won't move the current game to OpenGL, but I will still try to redesign my rendering engine with it; I want to see how it turns out. Thanks.



Yeah, I was thinking of that (using transparent pixels), but I'm afraid it will take up more memory. FYI, I usually load all the surfaces needed for the entire game up front, so there's no loading overhead except when the game first starts. Currently there are 170+ MB of BMP files for the game, all loaded at once. If I pad every surface to a power of two, it will take a lot more than 170 MB (for example, a 640x480 image padded to 1024x512 grows by about 70%). It would also require the 2D artist to remake all those assets. I think I won't implement it in the current game after all. Thanks anyway.



Oh yeah, there's also that approach. As a matter of fact, my current rendering engine can already blit sprites from an image file that contains multiple sprite images. But I required the sprite images in the BMP file to all be the same width and height (e.g. all 16x16, 40x30, etc.); I use this to blit my bitmap fonts. This will probably make the 2D artist complain. Well, thanks anyway.
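For reference, a minimal sketch of that fixed-grid scheme with plain SDL 1.2 blitting (the function name and parameters are my own; index counts cells left to right, top to bottom):

#include <SDL.h>

/* Blit cell `index` from a sheet of fixed cell_w x cell_h sprites
 * to (x, y) on the destination surface. */
static void blit_cell(SDL_Surface *sheet, int cell_w, int cell_h,
                      int index, SDL_Surface *screen, int x, int y)
{
    int cols = sheet->w / cell_w;
    SDL_Rect src = { (Sint16) ((index % cols) * cell_w),
                     (Sint16) ((index / cols) * cell_h),
                     (Uint16) cell_w, (Uint16) cell_h };
    SDL_Rect dst = { (Sint16) x, (Sint16) y, 0, 0 };

    SDL_BlitSurface(sheet, &src, screen, &dst);
}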



benang at cs.its.ac.id wrote:

Thanks for the reply. The graphics card is a VIA Unichrome, and as Matthias Weigand said, it's probably not as good as I want it to be. Also, the VGA has no vendor driver for Linux, so I use the standard one that comes with the machine. I think I should reconsider this some more. I'll still redesign my rendering engine (or recreate it from scratch) with OpenGL, but not for the current game. Thanks again.

Okay, so come back to this concept when you can get a better graphics card, then? :)

Personally, I’m curious how it would work out. While I’ve considered doing something purely 2D in OpenGL, I’ve never actually tried it, and don’t know how the speed compares.

I've also considered using OpenGL with GLSL (which I guess you don't have access to) to do a purely sprite-based isometric engine, where every sprite has a normal map and a depth map, allowing full, fast per-pixel lighting - which is really neat in concept.

But then I have to think how much work that would be, compared to just doing it in 3D to begin with, which will generally produce better results anyway.

I did a 2D side-scroller in GL.

Extremely fast rendering, with "free" rotation, scaling, skewing, tinting, alpha, etc.

It was a good experience. :P
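As an illustration, a minimal sketch of drawing one sprite rotated, scaled, and tinted (the variables and the draw_sprite helper are hypothetical, borrowed from the atlas sketch earlier in the thread):

/* Rotation, scaling, and tinting are folded into the modelview
 * matrix and vertex color, so they cost nothing extra per sprite. */
glPushMatrix();
glTranslatef(x, y, 0.0f);           /* position the sprite */
glRotatef(angle, 0.0f, 0.0f, 1.0f); /* rotate around its origin */
glScalef(2.0f, 2.0f, 1.0f);         /* draw at double size */
glColor4f(1.0f, 0.5f, 0.5f, 0.8f);  /* reddish tint at 80% alpha */
draw_sprite(sprite, 0.0f, 0.0f);
glPopMatrix();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);  /* reset tint for later draws */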


On Wed, 2007-07-18 at 10:38 +0700, benang at cs.its.ac.id wrote:

Oh yeah, there's also that approach. As a matter of fact, my current rendering engine can already blit sprites from an image file that contains multiple sprite images. But I required the sprite images in the BMP file to all be the same width and height (e.g. all 16x16, 40x30, etc.); I use this to blit my bitmap fonts. This will probably make the 2D artist complain. Well, thanks anyway.

::sigh:: Think about it a little more. You're dismissing a very elegant and correct solution simply because it isn't the same as your existing solution.

There is no reason that all of the sprites in one BMP have to be the same size. At all. As I said, each sprite can have its own width, height, and its own offset within the texture image. It takes a little more code than making everything the same size, but it's worth it.

It doesn't take any real additional work on the part of the artists, either, especially if you're already using packed sprite image files.

Note that this is exactly how just about any 3D game handles model textures. Take a complex model like a medieval knight: most engines will use a single texture bitmap packed with the required patches of chain mail, flesh, cloth, hair, eyes, etc., and map that single texture over the model's vertices. Most professional 2D games (i.e., the golden-age SNES games) did the same thing. Most tiled/2D games get that wrong, because most tile rips you'll find of those old games are done with screenshots and an image editor, not the actual image packs from the ROM. There's a reason the old games did it that way - it worked, it was memory efficient, and it was easy for artists to handle.


Well, I'm still going to try remaking my engine, but not for the current game, because that would require a lot of redesigning - the game itself, the engine, and the artwork. The current artwork is one BMP file per sprite (except for the bitmap fonts); that's what the artist usually does, so I just went with the flow and made my rendering engine accommodate the artist's way of making assets.

I'll probably make a newer engine for the next project, and for that engine I'll ask the artist to make each BMP file consist of several sprites.

FYI, the next project is actually 3D, but the project leader is still trying to decide whether we'll go pure 3D or draw pre-rendered 3D assets in a 2D environment. For pure 3D he said we'll use Torque; for pre-rendered, I just have to remake my SDL engine.

Thanks a lot though.


Fare thee well,
Bawenang R. P. P.
