Slow OpenGL code

Hi all,
I’m doing some tests with OpenGL, and I'm getting strange results.

Here is some code:

bool UseOpenGL = true; // Use OpenGL
//bool UseOpenGL = false; // Use Standard SDL functions

if(UseOpenGL) {
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glClearColor( 0.0f, 0.0f, 0.0f, 0.5f );
glPointSize( 2.0f );
}

bool done = false;
int NumberOfPoints = 0;
int PosX, PosY;

while (!done) {
PosX = rand() % screen_width;
PosY = rand() % screen_height;

if(!UseOpenGL) {
   R = rand() % 255;
   G = rand() % 255;
   B = rand() % 255;
   color = SDL_MapRGB(Surface->format, R, G, B);
   // this function changes surface->pixels and updates the screen
   DrawPixel(PosX, PosY, color, true);
   NumberOfPoints++;
}
else {
   R = (float)rand() / RAND_MAX;
   G = (float)rand() / RAND_MAX;
   B = (float)rand() / RAND_MAX;
   glBegin(GL_POINTS);
      glColor3f( R , G , B );
      glVertex2d( PosX, PosY );
   glEnd();

   SDL_GL_SwapBuffers();
   NumberOfPoints++;
}

if( SDL_GetTicks() > StartTicks + 5000) { done = true; }

}

It’s a little intro for a game: it starts with a black screen,
then it draws random pixels with random colours.
After 5 seconds, it ends.

If the variable UseOpenGL is true, I use OpenGL functions,
otherwise I use standard SDL functions.

The problem is that when UseOpenGL is false, I’m using
SDL functions and I can draw 243664 points;
with OpenGL functions, only 393.

I’m using WinXP Pro with a GeForce2 MX card. My setup code:

Surface = SDL_SetVideoMode(640, 480, 0, videoFlags);

/* the flags to pass to SDL_SetVideoMode */
videoFlags  = SDL_OPENGL;           /* Enable OpenGL in SDL */
videoFlags |= SDL_GL_DOUBLEBUFFER;  /* Enable double buffering */
videoFlags |= SDL_HWPALETTE;        /* Store the palette in hardware */

/* This checks to see if surfaces can be stored in memory */
if ( videoInfo->hw_available ) videoFlags |= SDL_HWSURFACE;
else videoFlags |= SDL_SWSURFACE;

/* This checks if hardware blits can be done */
if ( videoInfo->blit_hw ) videoFlags |= SDL_HWACCEL;

/* Sets up OpenGL double buffering */
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );

I have several games that use OpenGL,
so I don’t think it could be a driver problem.

What’s wrong in my code?

--
SkunkGuru

On Sun, 29 Apr 2007 14:50:32 +0200,
SkunkGuru wrote:

   glBegin(GL_POINTS);
      glColor3f( R , G , B );
      glVertex2d( PosX, PosY );
   glEnd();

   SDL_GL_SwapBuffers();

You draw one single point before swapping screen buffers. Try to draw
more. I think your hardware can handle more points per frame :-).

--
Patrice Mandin
WWW: http://pmandin.atari.org/
Linux, Atari programmer
Specialty: development, games
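For illustration, a minimal sketch of that suggestion (not from the original thread; it reuses screen_width, screen_height and StartTicks from the first post, and PointsPerFrame is an invented knob): batch a whole frame's worth of points into one glBegin/glEnd pair and swap once per frame.

const int PointsPerFrame = 1000;   // illustrative; tune to taste
int NumberOfPoints = 0;
bool done = false;

while (!done) {
   glBegin(GL_POINTS);
   for (int i = 0; i < PointsPerFrame; i++) {
      float R = (float)rand() / RAND_MAX;
      float G = (float)rand() / RAND_MAX;
      float B = (float)rand() / RAND_MAX;
      glColor3f(R, G, B);
      glVertex2d(rand() % screen_width, rand() % screen_height);
      NumberOfPoints++;
   }
   glEnd();

   SDL_GL_SwapBuffers();   // one swap per frame, not one per point

   if (SDL_GetTicks() > StartTicks + 5000) { done = true; }
}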

Patrice Mandin wrote:

On Sun, 29 Apr 2007 14:50:32 +0200,
SkunkGuru <@SkunkGuru> wrote:

   glBegin(GL_POINTS);
      glColor3f( R , G , B );
      glVertex2d( PosX, PosY );
   glEnd();

   SDL_GL_SwapBuffers();

You draw one single point before swapping screen buffers. Try to draw
more. I think your hardware can handle more points per frame :-).

Yes, but:

1. Why can SDL draw thousands more points than OpenGL?
(243664 vs. 393)

2. If I swap buffers every 250 points, I get 99501 points…
but that's still about 1/3 of what SDL manages…

3. I get 393 points with double buffering, and still 393 points
WITHOUT double buffering… so there's no swap…
and it should write directly to video memory, I think.

I think I made a mistake, but I can't see it :-)

--
SkunkGuru

Just to help a bit…

Think of every GL call as a message that has to go from the CPU, to
the motherboard, pass a couple more places, then arrive at the
video card and be delivered.
The more GL calls you make, the slower it will be, and that depends on
overall PC performance (not video card performance).

Also think of each glBegin()…glEnd() pair as a way of saying: “I
gave you all there is, now work it out”.
Sending one point within the glBegin/glEnd is like "here's a point,
work on it, then take a break until I have another one".
You get limited by your PC's overall performance.

Sending 1000 points within a glBegin/glEnd pair is not the same AT ALL
as sending 1000 pairs with one point in each.

OpenGL works best when given all the information, then given the order
to process:
“Here’s all the points, now flush that on screen”

Even if you just want to add a couple more points this frame, it
is best to redraw the whole thing from scratch. With today's video cards,
you can expect to work on at least a couple thousand points per frame, if
not millions! (Well, at least I could back in 2000.)

On a point-per-point basis, SDL is faster; it has more direct access
to the pixels (though it depends on the driver too).

Also, I saw you were using double buffering. This means you have two
draw buffers.
You draw a couple of points on FB1, flush, OK. You draw a couple of
points on FB2, flush, and you see only FB2's points. Draw some more
points on FB1, flush, and you don't see FB2's points… etc., etc.
Your points are being spread between the two buffers.

Again, it's best to give OpenGL everything you need for this specific
frame, make it work hard, make it create the whole world from nothingness
every frame.
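A sketch of that idea (invented names; it assumes the same setup as the first post): remember every dot generated so far and recreate the whole picture each frame, so both buffers always end up showing the same thing.

#include <vector>

struct Dot { float x, y, r, g, b; };

std::vector<Dot> dots;
bool done = false;

while (!done) {
   // add a batch of new random dots for this frame
   for (int i = 0; i < 500; i++) {
      Dot d;
      d.x = (float)(rand() % screen_width);
      d.y = (float)(rand() % screen_height);
      d.r = (float)rand() / RAND_MAX;
      d.g = (float)rand() / RAND_MAX;
      d.b = (float)rand() / RAND_MAX;
      dots.push_back(d);
   }

   glClear(GL_COLOR_BUFFER_BIT);   // recreate the whole frame from scratch
   glBegin(GL_POINTS);
   for (size_t i = 0; i < dots.size(); i++) {
      glColor3f(dots[i].r, dots[i].g, dots[i].b);
      glVertex2f(dots[i].x, dots[i].y);
   }
   glEnd();
   SDL_GL_SwapBuffers();

   if (SDL_GetTicks() > StartTicks + 5000) { done = true; }
}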

--
Simon


Simon wrote:

Also think of every glBegin()…glEnd() calls as a way of saying: “I
gave you all there is, now work it out”.

So every frame I should have only a few Begin/End sections?
Or does having a lot of begin/end pairs in the same frame
not matter?

I mean, is this:
glBegin(GL_TRIANGLES);
// draw 100 triangles
glEnd();
SDL_GL_SwapBuffers( );

better than this:

glBegin(GL_TRIANGLES);
// draw ONE triangle
glEnd();
glBegin(GL_TRIANGLES);
// draw ONE triangle
glEnd();
…repeated 100 times…

SDL_GL_SwapBuffers( );

Sending 1000 points within a glBegin/glEnd pair is not the same AT ALL
than sending 1000 pairs with one point in each.

It seems this answers my previous question :-)
right?

Also I saw you were using double buffering. This means you have two
draw buffers.

Yes, but without double buffering I get the same (bad) performance.
Without double buffering, shouldn't it write directly to the video
card's memory?

You see that your points are being spread between the two buffers.

Yes, I should draw the same points twice;
I know this, but I had a bigger problem :-D

Again, best is to give OpenGL all you need for This specific frame,
make it work hard, make it create the whole world from nothingness
every frame.

OK, but if I can't draw one pixel per frame,
how can I get the same effect that I get with SDL?
I mean, the random points appearing one by one?
Only by swapping every 250/500/1000 points?

--
SkunkGuru

On Apr 29, 2007, at 6:50 AM, SkunkGuru wrote:

It’s a little intro for a game: it starts with a black screen,
then it draws random pixels with random colours.
After 5 seconds, it ends.

If the variable UseOpenGL is true, I use OpenGL functions,
otherwise I use standard SDL functions.

The problem is that when UseOpenGL is false, I’m using
SDL functions and I can draw 243664 points;
with OpenGL functions, only 393.

You aren't running on a monitor with a refresh rate around 80Hz, are
you? Because it looks to me like you're synced with your monitor's
refresh rate, getting you (plus or minus timing error) 80 * 5 = ~400
refreshes, and with only one dot per refresh you get about that many
dots.

My recommendation is to have a target number of dots you want drawn,
and then draw enough more each frame so that they show up at the right rate.
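In other words, compute each frame how many dots should exist by now and add the difference. A sketch (TargetPoints and DurationMs are invented names; dots is the accumulated list from the earlier sketch):

const int    TargetPoints = 250000;   /* invented: dots wanted in total */
const Uint32 DurationMs   = 5000;     /* invented: length of the effect */

Uint32 elapsed = SDL_GetTicks() - StartTicks;
if (elapsed > DurationMs) elapsed = DurationMs;

/* how many dots should exist by now, and how many to add this frame */
int shouldHave   = (int)((Uint32)TargetPoints * elapsed / DurationMs);
int newThisFrame = shouldHave - (int)dots.size();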

Also, the trouble with OpenGL is that indeed, you should be redrawing
everything every frame. For this I recommend either having a texture
that you constantly update to hold the pixels (this is pretty dirty,
though) or keeping an array of 1x1-pixel polygons (plain GL points may
do just as well here) that you add each new dot to.
It occurs to me, though, that on any hardware, if you go for the numbers
you're seeing with SDL, redrawing THAT many polygons may burn your
framerate in the final frames…

What about render-to-texture? I've not done it, though I know NeHe
has a decent tutorial for it on his site. If you have a texture
roughly the size of your screen (to keep it down to one pixel at a
time), you could draw the texture on a screen-size quad, then render
however many new dots are needed (probably safely in the realm of
only a few hundred to a few thousand polys) to a second texture, and
render that texture to a screen-size quad on the display. Then you
switch: draw the new (2nd) texture onto the first texture in memory,
then the new batch of dots, and so on…

It’s complex, but I think you’ll be surprised how little it affects
your framerate. It’s also more complex for you, but I think that’s
because OpenGL is designed to make the simplest things the fastest,
so all of its textures are uploaded to video memory, which is pretty
static by nature, and to do more intricate things one must work
around the bare-bones system provided.
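A rough sketch of the accumulation idea using only OpenGL 1.1 calls: instead of true render-to-texture, copy the back buffer into a texture with glCopyTexSubImage2D after drawing. Everything here is an assumption for illustration: the texture name, the 640x480 ortho projection, and the 1024x512 size (old cards want power-of-two textures).

/* once, at startup */
GLuint accum;
glGenTextures(1, &accum);
glBindTexture(GL_TEXTURE_2D, accum);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* each frame: 1) redraw everything accumulated so far */
glColor3f(1.0f, 1.0f, 1.0f);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);   /* tex coords stop at 640/1024 and 480/512 */
   glTexCoord2f(0.0f, 0.0f);                    glVertex2f(0.0f,   0.0f);
   glTexCoord2f(640.0f/1024.0f, 0.0f);          glVertex2f(640.0f, 0.0f);
   glTexCoord2f(640.0f/1024.0f, 480.0f/512.0f); glVertex2f(640.0f, 480.0f);
   glTexCoord2f(0.0f, 480.0f/512.0f);           glVertex2f(0.0f,   480.0f);
glEnd();
glDisable(GL_TEXTURE_2D);

/* 2) add this frame's new dots with glBegin(GL_POINTS)...glEnd() */

/* 3) copy the back buffer into the texture for the next frame;
      depending on your projection you may need to flip vertically */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 640, 480);

SDL_GL_SwapBuffers();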

There may be a simpler option, however. This is just off the top of
my head here.

Good luck!
--
Scott

The other OpenGL advice you got was good, but a few SDL-specific points:

Surface = SDL_SetVideoMode(640, 480, 0, videoFlags);

/* the flags to pass to SDL_SetVideoMode */
videoFlags  = SDL_OPENGL;           /* Enable OpenGL in SDL */
videoFlags |= SDL_GL_DOUBLEBUFFER;  /* Enable double buffering */
videoFlags |= SDL_HWPALETTE;        /* Store the palette in hardware */

  • You’re updating videoFlags after calling SDL_SetVideoMode.
  • SDL_GL_DOUBLEBUFFER is not a flag you pass to SDL_SetVideoMode, so
    it’s not doing what you think here.

if ( videoInfo->hw_available ) videoFlags |= SDL_HWSURFACE;
else videoFlags |= SDL_SWSURFACE;

  • These are for 2D, not OpenGL.
  • Get this information from the (flags) field of SDL_Surface returned
    from SDL_SetVideoMode(). If you want a HWSURFACE, set it in
    SDL_SetVideoMode, and then check the surface to see if you got it.
  • If you want to write directly to the surface instead of using
    SDL_BlitSurface and SDL_FillRect, you don’t want HWSURFACE. Generally,
    you don’t want HWSURFACE unless you’re using 2D stuff and everything
    is in a HWSURFACE, so you can blit between surfaces in video memory.

/* Sets up OpenGL double buffering */
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );

  • You have to call this before SDL_SetVideoMode or it doesn’t do anything.
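Put together, a corrected setup might look roughly like this (a sketch only, with minimal error handling; the window size and bit depth are the ones from the original post):

/* set GL attributes BEFORE SDL_SetVideoMode */
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

/* SDL_OPENGL is the flag that matters here; the 2D surface flags
   (SDL_HWSURFACE etc.) don't apply to an OpenGL window */
Uint32 videoFlags = SDL_OPENGL;

SDL_Surface *Surface = SDL_SetVideoMode(640, 480, 0, videoFlags);
if (Surface == NULL) {
    fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
    exit(1);
}

/* check what we actually got, not what we asked for */
int dbuffer = 0;
SDL_GL_GetAttribute(SDL_GL_DOUBLEBUFFER, &dbuffer);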

--ryan.

Ryan C. Gordon wrote:

  • You’re updating videoFlags after calling SDL_SetVideoMode.

Yes, it was a copy/paste error :-)

  • SDL_GL_DOUBLEBUFFER is not a flag you pass to SDL_SetVideoMode, so
    it’s not doing what you think here.

Good… I downloaded this init code from the NeHe site and
didn't check whether it was correct… :-D

/* Sets up OpenGL double buffering */
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );

  • You have to call this before SDL_SetVideoMode or it doesn’t do anything.

Yes, I already do this correctly; same copy/paste error :-)

Thanks,

--
SkunkGuru

Scott Harper wrote:

You aren't running on a monitor with a refresh rate around 80Hz, are
you? Because it looks to me like you're synced with your monitor's
refresh rate, getting you (plus or minus timing error) 80 * 5 = ~400
refreshes, and with only one dot per refresh you get about that many
dots.

My refresh rate is 85Hz, so it should be ~425 refreshes.

[CUT]

Thanks for your ideas :-)

--
SkunkGuru

Make sure you have the latest video drivers for your video card (downloaded
from the manufacturer's website), because XP shipped with GL drivers that
were supposedly intentionally slowed down to make DirectX seem like the
better choice of graphics API.

Whether it was intentional or not might be a debate in itself, but grab the
latest and see if it doesn't clear things up for you.


On 2007/4/30, Alan Wolfe wrote:

Make sure you have the latest video drivers for your video card (downloaded
from the manufacturer's website), because XP shipped with GL drivers that
were supposedly intentionally slowed down to make DirectX seem like the
better choice of graphics API.

Yes, I already have the latest NVIDIA drivers :-)

--
SkunkGuru

To randomly make points pop up on the screen:

  • Create a software SDL surface

  • Copy the empty surface into video memory

  • Sleep a couple of ms

  • Time how long that took

Loop until enough points are drawn:

  • Draw a few points into it (how many depends on the time the previous
    iteration took)
  • Update the screen
  • Sleep a couple of ms

That will fill up your screen with points at a consistent speed on all your
targets (a sketch follows below).
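Here is a minimal sketch of that loop in SDL terms. TotalPoints, DurationMs and the 5 ms sleep are invented for illustration; DrawPixel is the original poster's own helper, called with its update-screen flag set to false so the screen is updated only once per pass, and RandomColor() is a stand-in for the SDL_MapRGB lines from the first post.

/* Pace the drawing so the screen fills at the same speed everywhere,
   no matter how fast a single point can be drawn. */
const int    TotalPoints = 250000;   /* dots wanted after 5 seconds */
const Uint32 DurationMs  = 5000;
Uint32 start = SDL_GetTicks();
int    drawn = 0;

while (drawn < TotalPoints) {
   Uint32 elapsed = SDL_GetTicks() - start;
   if (elapsed > DurationMs) elapsed = DurationMs;

   /* how many points should be on screen by now */
   int target = (int)((Uint32)TotalPoints * elapsed / DurationMs);

   while (drawn < target) {
      DrawPixel(rand() % screen_width, rand() % screen_height,
                RandomColor(), false);   /* draw, don't update yet */
      drawn++;
   }

   SDL_UpdateRect(Surface, 0, 0, 0, 0);  /* one screen update per pass */
   SDL_Delay(5);                         /* sleep a couple of ms */
}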

The only real difference between OpenGL & SDL's 2D routines is how you
transfer your surface to the screen. In SDL it's done using SDL_BlitSurface()
and SDL_UpdateRect(); in OpenGL, using glTexImage2D() (or glTexSubImage2D()),
drawing a screen-sized textured rectangle and a final SDL_GL_SwapBuffers().
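For the OpenGL variant, the transfer might look roughly like this (a sketch; 'canvas' and 'tex' are invented names: a 640x480 24-bit RGB software SDL_Surface holding the dots, and a GL_RGB texture created earlier with glTexImage2D):

glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows are tightly packed */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                canvas->w, canvas->h,
                GL_RGB, GL_UNSIGNED_BYTE, canvas->pixels);

/* note: check the surface's channel order; some 24-bit surfaces
   store BGR rather than RGB */

/* ...then draw a screen-sized textured quad, as in the earlier
   sketch, and finish the frame with: */
SDL_GL_SwapBuffers();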
