SDL2 and OpenGL with Intel graphics on Windows XP and Vista

Hi,

I have a game that uses OpenGL with SDL 1.2 and recently updated it to SDL2.
The game starts up with a blank screen when using the SDL renderer. If I use
a GL context instead, the game runs slowly. Is there anything I am doing
wrong? Before, the game was able to run at at least 30 fps. In addition, I
use SDL_mixer and SDL_ttf and have updated them for SDL2. Here is my code:

if (SDL_Init(SDL_INIT_EVERYTHING) < 0) {
    fprintf(stderr, "Unable to initialize SDL: %s\n", SDL_GetError());
    exit(1);
}
int flags = SDL_WINDOW_OPENGL; //|SDL_HWSURFACE;

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_SetHint(SDL_HINT_FRAMEBUFFER_ACCELERATION, "1");
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");

#ifdef USE_CONTEXT
if ((gameWindow = SDL_CreateWindow("game", SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED, w, h, flags)) == NULL) {
    fprintf(stderr, "Unable to create OpenGL screen: %s\n", SDL_GetError());
    SDL_Quit();
    exit(1);
}
glcontext = SDL_GL_CreateContext(gameWindow);
#else
if (SDL_CreateWindowAndRenderer(w, h, flags, &gameWindow, &renderer)) {
    fprintf(stderr, "Unable to create OpenGL screen: %s\n", SDL_GetError());
    SDL_Quit();
    exit(1);
}
SDL_SetWindowTitle(gameWindow, "game");
#endif

while (running) {
    runLogic();
    drawFrame();
    drawUI();
#ifdef USE_CONTEXT
    SDL_GL_MakeCurrent(gameWindow, glcontext);
    SDL_GL_SwapWindow(gameWindow);
#else
    SDL_RenderPresent(renderer);
#endif
    calculateFPS();
}

SDL_Quit();

Thank you in advance.

For one thing, there’s no need to call SDL_GL_MakeCurrent every frame.
When you create a context, it’s made current automatically. Unless
you have more than one GL context, the one you created should always
be current. That shouldn’t bog your code down too much, but it’s the
only difference I can see based on what you’ve shown here.

Secondly, SDL’s renderer sets up a projection matrix with glOrtho
(and resets it when your window is resized). If you’re trying to set
up a projection matrix with glFrustum, you’ll first want to set the
matrix mode to GL_PROJECTION and load the identity, then create the
frustum matrix. If you’re not doing that, you’re likely to wind up
with a screen that’s just the color of whatever glClearColor is set to.

If you’re using a variant of gluPerspective to calculate the values
for glFrustum, you can shave a few cycles by not doing the trig over
and over again.

This probably won’t stop your slowness problem in and of itself, but
it should get both cases above working and at approximately the same
speed.

Joseph

On Tue, Nov 05, 2013 at 04:20:19PM +0800, temp account wrote:


Thanks for pointing that out. I have checked my code and the libraries'
source; none of it uses glFrustum. I have also done test runs on OS X
10.7.5, Windows 7 (Intel graphics and NVIDIA Optimus), and Windows 8
(through Parallels Desktop on a Mac). There are no problems running on
those. I do not have a system with a dedicated graphics card on which to
test XP and Vista to confirm.

My friend has tried running it on his laptop, with Intel graphics and
Windows 7. He gets the same thing as I do on XP and Vista. It seems to
affect systems with Intel integrated graphics.

On Tue, Nov 5, 2013 at 4:55 PM, T. Joseph Carter <tjcarter at spiritsubstance.com> wrote:

SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

A projection matrix tends to get set in only a few ways in the fixed
pipeline:

glOrtho
glFrustum
gluPerspective (which uses glFrustum)
glLoadMatrix{df}

Of these, glLoadMatrix is going to be the fastest if you need to
change among a few static projection geometries often. The code
might use that.

The other two common projections are orthographic and frustum. For
orthographic, let’s say we have a 640x480 window and we call glOrtho
thusly:

glOrtho(0.0, 640.0, 480.0, 0.0, 0.1, 1.0);

That’s left, right, bottom, top, near, and far, if you don’t know.
And by the way, those are floats (doubles actually), even though
we’ve set it so integer points map 1:1 to pixels. You can still
specify a “half pixel” and see it with anti-aliasing drivers. But
that’s beyond the scope here.

First, let’s do a little setup by clearing the screen and turning on
depth testing:

glEnable(GL_DEPTH_TEST);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

And then draw a red quad:

glBegin(GL_QUADS);
glColor3ub(0xff, 0x00, 0x00);
glVertex3f( 8.0,  8.0, 0.5);
glVertex3f(24.0,  8.0, 0.5);
glVertex3f(24.0, 24.0, 0.5);
glVertex3f( 8.0, 24.0, 0.5);

In a 640x480 window with the above ortho matrix, you’re guaranteed to
have a 16x16 square located near the upper left corner of the screen.
You can easily see how this gets used for 2D overlays/HUDs on 3D
games: just use a white glColor, map a texture with your HUD
graphics, and use a few glTexCoord2f’s in there. (This, by the way, is
pretty much what SDL’s renderer does!)

But we’re going to draw another square:

glColor3ub(0x00, 0x00, 0xff);
glVertex3f(12.0, 12.0, 0.6);
glVertex3f(32.0, 12.0, 0.6);
glVertex3f(32.0, 32.0, 0.6);
glVertex3f(12.0, 32.0, 0.6);
glEnd();

This one’s further back (0.6), but it’s the same 16x16 square, now in
blue. Even though it was drawn second, it appears behind the red
square because we don’t want to tick off the guy who hunts bears with
his shirt off. Or the depth test. One of those two things. :)

The reason why both squares are exactly 16x16 is because of the
orthographic projection. It’s like looking at a box, straight down.

As long as what you’re drawing is opaque, you need not worry about
depth sorting or culling out hidden polygons for accurate rendering.
(Speed is another matter.) That goes for 2D and 3D, but you can use
it in an OpenGL-rendered 2D game to replace sprite layers. For four
layers, just use 0.2, 0.4, 0.6, and 0.8 (all within the 0.1 and 1.0
zNear and zFar) and set glDepthFunc(GL_LEQUAL) so that you can get
the software-renderer-like drawing on top of existing sprites on the
same layer. Actually, you could have as many layers as your float
precision allows if you wanted them. Just stay within zNear and zFar,
or the driver will clip away what you draw as out of view.

That works for 2D graphics, but it’s not how we see the world. What
we see is more like a cone. But not really quite a cone, because the
surface of our eye is not quite a point. Since our screens are
rectangular, and because computers are limited in other ways, what we
actually have is something called a frustum. A frustum is a geometric
solid cut by two parallel planes; in OpenGL, it’s a rectangular pyramid
turned on its side, with its base at zFar and its tip cut off at zNear.
Anything closer than zNear is clipped, and so is anything drawn beyond
zFar.

While we tend to define orthographic projections in terms of screen
resolution with the origin in a corner, frustum projections tend to
put X and Y zero at the center of the screen with the eye at the
origin and the positive z axis moving away from you in the forward
direction.

No code to explain this one since I couldn’t come up with a useful
example of how it’s different just out of thin air. Basically if you
draw a checkerboard tile from xMin to xMax and extend it back along
the z axis, you’ll see that the tiles aren’t strictly squares.
They’re trapezoids that get smaller as they head off toward zFar.
That kind of distance distortion is what a frustum is for because it
looks totally natural until you really start to think about it.

Of course, where xMin, xMax, yMin, and yMax come from is up to you,
but the most common way to get them is gluPerspective or its
equivalent. It takes yFOV and an aspect ratio to calculate these
things along with the now familiar zNear and zFar. For our 640x480
window, gluPerspective might be called thusly:

gluPerspective(45.0, 640 / (GLdouble)480, 0.6, 100.0);

But really, pulling in glu for just this function is silly. So let’s
call glFrustum ourselves:

yFOV = 45.0;
aspect = 640 / (GLdouble)480;
zNear = 0.6;
zFar = 100.0;

yMax = tan( yFOV / 360 * M_PI ) * zNear;
xMax = yMax * aspect;
/* yMin and xMin are trivial */

glFrustum(-xMax, xMax, -yMax, yMax, zNear, zFar);

You need only keep track of your z planes and xMax/yMax. And you
don’t have to recalculate the tangent or convert yFOV to radians over
and over again.

The last thing to keep in mind is that glOrtho and glFrustum (and
therefore also gluPerspective) are matrix multiplications. They do
what you think only when the current matrix is the identity. You get
that by setting glMatrixMode(GL_PROJECTION) and glLoadIdentity().
The identity matrix is this:

GLdouble identity[16] = {
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0 
};

Just remember that OpenGL matrices are column-major, not row. That
doesn’t matter for the identity matrix, but:

GLdouble matrix[16] = {
    m_0, m_1, m_2, m_3,
    m_4, m_5, m_6, m_7,
    m_8, m_9, m_a, m_b,
    m_c, m_d, m_e, m_f 
};

is mathematically equivalent to:

| m_0 m_4 m_8 m_c |
| m_1 m_5 m_9 m_d |
| m_2 m_6 m_a m_e |
| m_3 m_7 m_b m_f |

Direct3D is row-major, not column-major, which makes more sense
looking at source code, but less looking at math problems.

Calculating matrices for glLoadMatrix{df} is beyond the scope of what
I’ll talk about here, since if the code you’re porting did that, you
wouldn’t be having the black screen problem in the first place. It’s
super easy to make an ortho matrix; a frustum matrix takes a little
more effort. And of course you can do even more interesting things,
but people rarely do. :)

Hopefully that gives you some basis for debugging.

Joseph

On Tue, Nov 05, 2013 at 07:26:03PM +0800, temp account wrote:


Fixed it. It turns out SDL_Init(SDL_INIT_EVERYTHING), specifically
SDL_INIT_VIDEO, was causing the problem. Once I exclude it, the game
shows.

On Wed, Nov 6, 2013 at 3:53 AM, T. Joseph Carter <tjcarter at spiritsubstance.com> wrote:


Um, are you using something other than SDL to set up OpenGL? If you
are using SDL’s OpenGL functions at all, SDL_INIT_VIDEO is kind of
required. That it doesn’t blow up in your face is an accident.

Joseph

On Wed, Nov 06, 2013 at 05:19:56PM +0800, temp account wrote:

Fixed it. Turns out SDL_Init(SDL_INIT_EVERYTHING)
specifically SDL_INIT_VIDEO is causing the problem. Once I exclude it, the
games shows.

On Wed, Nov 6, 2013 at 3:53 AM, T. Joseph Carter < @T_Joseph_Carter> wrote:

A projection matrix tends to get set in one only a few ways in the fixed
pipeline:

glOrtho
glFrustum
gluPerspective (which uses glFrustum)
glLoadMatrix{df}

Of these, glLoadMatrix is going to be the fastest if you need to change
amongst few static projection geometries often. The code might use that.

The other two common projections are orthographic and frustum. For
orthographic, let’s say we have a 640x480 window and we call glOrtho thusly:

    glOrtho(0.0, 640.0, 480.0, 0.0, 0.1, 1.0);

That’s left, right, bottom, top, near, and far, if you don’t know. And by
the way, those are floats (doubles actually), even though we’ve set it so
integer points map 1:1 to pixels. You can still specify a “half pixel” and
see it with anti-aliasing drivers. But that’s beyond the scope here.

First, let’s do a little setup by clearing the screen and turning on depth
testing:

    glEnable(GL_DEPTH_TEST);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BIT|GL_DEPTH_BIT);

And then draw a red quad:

    glBegin(GL_QUADS);
    glColor3ub(0xff, 0x00, 0x00);
    glVertex3f( 8.0,  8.0, 0.5);
    glVertex3f(24.0,  8.0, 0.5);
    glVertex3f(24.0, 24.0, 0.5);
    glVertex3f( 8.0, 24.0, 0.5);

In a 640x480 window with the above ortho matrix, you’re guaranteed have a
16x16 square located near the upper left corner of the screen. You can
easily see how this gets used for 2D overlays/HUDs on 3D games?just use a
white glColor and map the texture with your HUD graphics and use a few
glTexCoord2f’s in there. (This by the way is pretty much what SDL’s
renderer does!)

But we’re going to draw another square:

    glColor3ub(0x00, 0x00, 0xff);
    glVertex3f(12.0, 12.0, 0.6);
    glVertex3f(32.0, 12.0, 0.6);
    glVertex3f(32.0, 32.0, 0.6);
    glVertex3f(12.0, 32.0, 0.6);
    glEnd(GL_QUADS);

This one’s further back (0.6), but it’s the same 16x16 square, now in
blue. But even though it was drawn second, it appears behind the red
square because we don’t want to tick off the guy who hunts bears with his
shirt off. Or the depth test. One of those two things. :slight_smile:

The reason why both squares are exactly 16x16 is because of the
orthographic projection. It’s like looking at a box, straight down.

As long as what you’re drawing is opaque, you need not worry about depth
sorting or culling out hidden polygons for accurate rendering. (Speed is
another matter.) That goes for 2D and 3D, but you can use it in an
OpenGL-rendered 2D game to replace sprite layers. For four layers, just
use 0.2, 0.4, 0.6, and 0.8 (all within the 0.1 and 1.0 zNear and zFar) and
set glDepthFunc(GL_LEQUAL) so that you can get the software-renderer-like
drawing on top of existing sprites on the same layer. Actually, you could
have as many layers as your float precision allows if you wanted them.
Just stay within zNear and zFar or the driver will clip what you draw out
as unable to be seen.

That works for 2D graphics, but it’s not how we see the world. What we
see is more like a cone. Not quite a cone, really, because the surface of
our eye is not quite a point. Since our screens are rectangular, and
since computers are finite besides, what we actually use is called a
frustum: the portion of a solid lying between two parallel planes. In
OpenGL, it’s a rectangular pyramid turned on its side, with its base at
zFar and its tip cut off at zNear. Anything closer than zNear is clipped,
and so is anything drawn beyond zFar.

While we tend to define orthographic projections in terms of screen
resolution with the origin in a corner, frustum projections tend to put X
and Y zero at the center of the screen with the eye at the origin and the
positive z axis moving away from you in the forward direction.

No code to explain this one, since I couldn’t come up with a useful
example of how it’s different just out of thin air. But if you draw a
checkerboard floor from xMin to xMax and extend it back along the z axis,
you’ll see that the tiles aren’t strictly squares. They’re trapezoids
that get smaller as they recede toward zFar. That kind of distance
distortion is what a frustum is for: it looks totally natural until you
really start to think about it.

Of course, where xMin, xMax, yMin, and yMax come from is up to you, but
the most common way to get them is gluPerspective or its equivalent. It
takes yFOV and an aspect ratio to calculate these things along with the now
familiar zNear and zFar. For our 640x480 window, gluPerspective might be
called thusly:

    gluPerspective(45.0, 640 / (GLdouble)480, 0.6, 100.0);

But really, pulling in glu for just this function is silly. So let’s call
glFrustum ourselves:

    yFOV = 45.0;
    aspect = 640 / (GLdouble)480;
    zNear = 0.6;
    zFar = 100.0;

    yMax = tan( yFOV / 360 * M_PI ) * zNear;
    xMax = yMax * aspect;
    /* yMin and xMin are trivial */

    glFrustum(-xMax, xMax, -yMax, yMax, zNear, zFar);

You need only keep track of your z planes and xMax/yMax. And you don’t
have to recalculate the tangent or convert yFOV to radians over and over
again.

The last thing to keep in mind is that glOrtho and glFrustum (and
therefore also gluPerspective) are matrix multiplications. They do what
you think only when the current matrix is the identity. You get that by
setting glMatrixMode(GL_PROJECTION) and glLoadIdentity(). The identity
matrix is this:

    GLdouble identity[16] = {
        1.0, 0.0, 0.0, 0.0,
        0.0, 1.0, 0.0, 0.0,
        0.0, 0.0, 1.0, 0.0,
        0.0, 0.0, 0.0, 1.0  };

Just remember that OpenGL matrices are column-major, not row. That
doesn’t matter for the identity matrix, but:

    GLdouble matrix[16] = {
        m_0, m_1, m_2, m_3,
        m_4, m_5, m_6, m_7,
        m_8, m_9, m_a, m_b,
        m_c, m_d, m_e, m_f  };

is mathematically equivalent to:

    | m_0 m_4 m_8 m_c |
    | m_1 m_5 m_9 m_d |
    | m_2 m_6 m_a m_e |
    | m_3 m_7 m_b m_f |

Direct3D is row-major, not column-major, which makes more sense when
you’re reading source code, but less so when you’re working the math.

Calculating matrices for glLoadMatrix{df} is beyond the scope of what I’ll
talk about here, since if what you’re porting does that you wouldn’t be
having the black screen problem in the first place. It’s super easy to
make an ortho matrix. A frustum matrix takes a little more effort. And of
course you can do even more interesting things, but people rarely do. :slight_smile:

Hopefully that gives you some basis for debugging.

Joseph

On Tue, Nov 05, 2013 at 07:26:03PM +0800, temp account wrote:

Thanks for pointing that out. I have checked my code and the libraries’
source; none are using glFrustum. I have also done test runs on OS X
10.7.5, Windows 7 (Intel graphics and NVIDIA Optimus), and Windows 8
(through Parallels Desktop on a Mac). There are no problems running on
them. I do not have a system with a dedicated graphics card to test on
XP and Vista to confirm.

My friend has tried running it on his laptop with Intel graphics and
Windows 7. He gets the same thing as I do on my XP and Vista machines.
It seems to affect those with Intel integrated graphics.

SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org


I am using SDL for OpenGL, but I am not drawing with SDL’s rendering
functions. When I use the window creation code from my first post in
testgl, it does not show up on my older machines. It works fine on
Windows 7 and OS X.

On Wed, Nov 6, 2013 at 5:32 PM, T. Joseph Carter <tjcarter at spiritsubstance.com> wrote:

Um, are you using something other than SDL to set up OpenGL? If you are
using SDL’s OpenGL functions at all, SDL_INIT_VIDEO is kind of required.
That it doesn’t blow up in your face is an accident.

Joseph

On Wed, Nov 06, 2013 at 05:19:56PM +0800, temp account wrote:

Fixed it. It turns out SDL_Init(SDL_INIT_EVERYTHING), specifically
SDL_INIT_VIDEO, is causing the problem. Once I exclude it, the game
shows.

My mistake. It was the SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
causing it to be blank. Once I remove that call and enable
SDL_INIT_VIDEO, it works. The fallback to a non-multisample-buffer
window does not work here, I think.

On Wed, Nov 6, 2013 at 5:57 PM, temp account <@temp_account> wrote:

Got it solved. If you used code like this to fall back:

    if ((surface = SDL_SetVideoMode(w, h, 0, flags)) == NULL) {
        fprintf(stderr, "Unable to create OpenGL FSAA screen: %s\n",
                SDL_GetError());
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);
        if ((surface = SDL_SetVideoMode(_width * retina, _height * retina,
                                        0, flags)) == NULL) {
            fprintf(stderr, "Unable to create OpenGL screen: %s\n",
                    SDL_GetError());
            SDL_Quit();
            exit(1);
        }
    }

it should be changed to something like this:

    if (SDL_CreateWindowAndRenderer(_width * retina, _height * retina,
                                    flags, &gameWindow, &renderer)) {
        fprintf(stderr, "Unable to create OpenGL screen: %s\n",
                SDL_GetError());
        SDL_Quit();
        exit(1);
    }
    /* check whether anti-aliasing was actually enabled */
    int value = 0;
    SDL_GL_GetAttribute(SDL_GL_MULTISAMPLEBUFFERS, &value);
    if (value == 0) {
        /* it wasn't granted, so recreate the window without it */
        SDL_DestroyRenderer(renderer);
        SDL_DestroyWindow(gameWindow);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
        SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);
        if (SDL_CreateWindowAndRenderer(_width * retina, _height * retina,
                                        flags, &gameWindow, &renderer)) {
            fprintf(stderr, "Unable to create OpenGL screen: %s\n",
                    SDL_GetError());
            SDL_Quit();
            exit(1);
        }
    }
It now works. Hope this helps. Thank you, Joseph.

On Wed, Nov 6, 2013 at 7:04 PM, temp account <@temp_account> wrote:

My mistake. It was the SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
causing it to be blank. Once I remove the code and enable SDL_INIT_VIDEO it
works. The fallback to non multi sample buffer windows does not work here I
think.

On Wed, Nov 6, 2013 at 5:57 PM, temp account <@temp_account>wrote:

I am using SDL for opengl but I am not drawing with SDL opengl functions.
When i use the windows creation code from my first post in testgl, it does
not show up on my older machines. It works fine in windows 7 and osx.

On Wed, Nov 6, 2013 at 5:32 PM, T. Joseph Carter < tjcarter at spiritsubstance.com> wrote:

Um, are you using something other than SDL to set up OpenGL? If you are
using SDL’s OpenGL functions at all, SDL_INIT_VIDEO is kind of required.
That it doesn’t blow up in your face is an accident.

Joseph

On Wed, Nov 06, 2013 at 05:19:56PM +0800, temp account wrote:

Fixed it. Turns out SDL_Init(SDL_INIT_EVERYTHING), specifically
SDL_INIT_VIDEO, is causing the problem. Once I exclude it, the game shows.

On Wed, Nov 6, 2013 at 3:53 AM, T. Joseph Carter < tjcarter at spiritsubstance.com> wrote:

A projection matrix tends to get set in only one of a few ways in the fixed
pipeline:

glOrtho
glFrustum
gluPerspective (which uses glFrustum)
glLoadMatrix{df}

Of these, glLoadMatrix is going to be the fastest if you need to change
among a few static projection geometries often. The code might use that.
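As a sketch of that caching idea (my own illustration, not code from this
thread): build the same matrix glOrtho would, keep it in a plain array, and
restore it later with glLoadMatrixd instead of redoing the multiply. The
formula below is the one documented for glOrtho, laid out column-major.

```c
#include <string.h>

/* Fill m (column-major, 16 doubles) with the matrix glOrtho(l, r, b, t, n, f)
   would multiply onto the stack.  Cache the result and restore it later with
   glMatrixMode(GL_PROJECTION); glLoadMatrixd(m); -- no recomputation needed. */
static void ortho_matrix(double *m, double l, double r,
                         double b, double t, double n, double f)
{
    memset(m, 0, 16 * sizeof *m);
    m[0]  =  2.0 / (r - l);
    m[5]  =  2.0 / (t - b);
    m[10] = -2.0 / (f - n);
    m[12] = -(r + l) / (r - l);  /* translation terms live in the last column */
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0;
}
```

With that, switching between (say) a HUD projection and a game projection is
one glLoadMatrixd each, rather than a glLoadIdentity plus glOrtho every time.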

The other two common projections are orthographic and frustum. For
orthographic, let's say we have a 640x480 window and we call glOrtho thusly:

    glOrtho(0.0, 640.0, 480.0, 0.0, 1.0, -1.0);

That's left, right, bottom, top, near, and far, if you don't know. (With
near = 1.0 and far = -1.0, z values between -1 and 1 are visible and larger
z is farther away under the default depth test.) And by the way, those are
doubles, even though we've set it so integer points map 1:1 to pixels. You
can still specify a "half pixel" and see it with anti-aliasing drivers. But
that's beyond the scope here.

First, let's do a little setup by clearing the screen and turning on depth
testing:

    glEnable(GL_DEPTH_TEST);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

And then draw a red quad:

    glBegin(GL_QUADS);
    glColor3ub(0xff, 0x00, 0x00);
    glVertex3f( 8.0,  8.0, 0.5);
    glVertex3f(24.0,  8.0, 0.5);
    glVertex3f(24.0, 24.0, 0.5);
    glVertex3f( 8.0, 24.0, 0.5);

In a 640x480 window with the above ortho matrix, you're guaranteed to have a
16x16 square located near the upper left corner of the screen. You can
easily see how this gets used for 2D overlays/HUDs on 3D games: just use a
white glColor and map the texture with your HUD graphics and use a few
glTexCoord2f's in there. (This, by the way, is pretty much what SDL's
renderer does!)

But we're going to draw another square:

    glColor3ub(0x00, 0x00, 0xff);
    glVertex3f(12.0, 12.0, 0.6);
    glVertex3f(28.0, 12.0, 0.6);
    glVertex3f(28.0, 28.0, 0.6);
    glVertex3f(12.0, 28.0, 0.6);
    glEnd();

This one's further back (z = 0.6), but it's the same 16x16 square, now in
blue. Even though it was drawn second, it appears behind the red square
because we don't want to tick off the guy who hunts bears with his shirt
off. Or the depth test. One of those two things. :)

The reason both squares are exactly 16x16 is the orthographic projection.
It's like looking at a box, straight down.

As long as what you're drawing is opaque, you need not worry about depth
sorting or culling out hidden polygons for accurate rendering. (Speed is
another matter.) That goes for 2D and 3D, but you can use it in an
OpenGL-rendered 2D game to replace sprite layers. For four layers, just use
0.2, 0.4, 0.6, and 0.8 (all between the near and far planes) and set
glDepthFunc(GL_LEQUAL) so that you get the software-renderer-like drawing on
top of existing sprites on the same layer. Actually, you could have as many
layers as your float precision allows if you wanted them. Just stay between
near and far or the driver will clip what you draw as unable to be seen.
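To make the layer arithmetic concrete, here is a hypothetical helper (my own
sketch, not from the thread) that spaces N layers evenly and strictly inside
the depth range, so nothing lands on a clip plane; with GL_LEQUAL, ties on
the same layer resolve in submission order, like a software renderer.

```c
/* Depth value for sprite layer i of `count` layers, spaced evenly and
   strictly inside (0, 1).  Smaller values are nearer under the usual depth
   test; pass the result as the z coordinate of the layer's quads. */
static double layer_depth(int i, int count)
{
    return (double)(i + 1) / (double)(count + 1);
}
```

For count = 4 this reproduces the 0.2, 0.4, 0.6, 0.8 spacing above.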

That works for 2D graphics, but it's not how we see the world. What we see
is more like a cone. But not really quite a cone, because the surface of our
eye is not quite a point. Since our screens are rectangular, and because
computers are limited otherwise, we actually have something called a
frustum. A frustum is a geometric solid cut by two parallel planes. In
OpenGL, it's a rectangular pyramid turned on its side, with its base at zFar
and cut off at zNear. Anything closer than zNear is clipped, and so is
anything drawn beyond zFar.

While we tend to define orthographic projections in terms of screen
resolution with the origin in a corner, frustum projections tend to put X
and Y zero at the center of the screen, with the eye at the origin looking
down the negative z axis (zNear and zFar are given as positive distances in
front of the eye).

No code to explain this one, since I couldn't come up with a useful example
of how it's different just out of thin air. Basically, if you draw a
checkerboard tile from xMin to xMax and extend it back along the z axis,
you'll see that the tiles aren't strictly squares. They're trapezoids that
get smaller as they head off toward zFar. That kind of distance distortion
is what a frustum is for, because it looks totally natural until you really
start to think about it.

Of course, where xMin, xMax, yMin, and yMax come from is up to you, but the
most common way to get them is gluPerspective or its equivalent. It takes
yFOV and an aspect ratio to calculate these things, along with the now
familiar zNear and zFar. For our 640x480 window, gluPerspective might be
called thusly:

    gluPerspective(45.0, 640 / (GLdouble)480, 0.6, 100.0);

But really, pulling in GLU for just this function is silly. So let's call
glFrustum ourselves:

    yFOV = 45.0;
    aspect = 640 / (GLdouble)480;
    zNear = 0.6;
    zFar = 100.0;

    yMax = tan( yFOV / 360 * M_PI ) * zNear;
    xMax = yMax * aspect;
    /* yMin and xMin are trivial */

    glFrustum(-xMax, xMax, -yMax, yMax, zNear, zFar);

You need only keep track of your z planes and xMax/yMax. And you don't have
to recalculate the tangent or convert yFOV to radians over and over again.

The last thing to keep in mind is that glOrtho and glFrustum (and therefore
also gluPerspective) are matrix multiplications. They do what you think only
when the current matrix is the identity. You get that by calling
glMatrixMode(GL_PROJECTION) and glLoadIdentity(). The identity matrix is
this:

    GLdouble identity[16] = {
        1.0, 0.0, 0.0, 0.0,
        0.0, 1.0, 0.0, 0.0,
        0.0, 0.0, 1.0, 0.0,
        0.0, 0.0, 0.0, 1.0  };

Just remember that OpenGL matrices are column-major, not row. That
doesn’t matter for the identity matrix, but:

    GLdouble matrix[16] = {
        m_0, m_1, m_2, m_3,
        m_4, m_5, m_6, m_7,
        m_8, m_9, m_a, m_b,
        m_c, m_d, m_e, m_f  };

is mathematically equivalent to:

    | m_0 m_4 m_8 m_c |
    | m_1 m_5 m_9 m_d |
    | m_2 m_6 m_a m_e |
    | m_3 m_7 m_b m_f |

Direct3D is row-major, not column-major, which makes more sense looking at
source code, but less looking at math problems.
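A quick illustration of why the layout matters (my own example, not from the
thread): a translation matrix keeps its offsets in the last column, so in
OpenGL's column-major array they land at indices 12, 13, and 14, not at 3,
7, and 11 as a row-major reading of the array would suggest.

```c
#include <string.h>

/* Build a translation by (tx, ty, tz) in OpenGL's column-major layout,
   suitable for glLoadMatrixd / glMultMatrixd. */
static void translation_matrix(double *m, double tx, double ty, double tz)
{
    memset(m, 0, 16 * sizeof *m);
    m[0] = m[5] = m[10] = m[15] = 1.0;  /* identity diagonal */
    m[12] = tx;                         /* last column holds the offsets... */
    m[13] = ty;
    m[14] = tz;                         /* ...not m[3], m[7], m[11] */
}
```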

Calculating matrices for glLoadMatrix{df} is beyond the scope of what I'll
talk about here, since if what you're porting did that, you wouldn't be
having the black screen problem in the first place. It's super easy to make
an ortho matrix. A frustum matrix takes a little more effort. And of course
you can do even more interesting things, but people rarely do. :)
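For completeness, here is what that "little more effort" looks like (my own
sketch, using the matrix documented for glFrustum, column-major): build it
once and hand it to glLoadMatrixd.

```c
#include <string.h>

/* Fill m (column-major, 16 doubles) with the matrix glFrustum(l, r, b, t,
   n, f) would produce; n and f must both be positive. */
static void frustum_matrix(double *m, double l, double r,
                           double b, double t, double n, double f)
{
    memset(m, 0, 16 * sizeof *m);
    m[0]  =  2.0 * n / (r - l);
    m[5]  =  2.0 * n / (t - b);
    m[8]  =  (r + l) / (r - l);       /* zero for a symmetric frustum */
    m[9]  =  (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1.0;                     /* the perspective divide comes from here */
    m[14] = -2.0 * f * n / (f - n);
}
```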

Hopefully that gives you some basis for debugging.

Joseph

On Tue, Nov 05, 2013 at 07:26:03PM +0800, temp account wrote:

Thanks for pointing that out. I have checked my codes and the

libraries
source. None are using glFrustum. I have also do a test run on osx
10.7.5,
windows 7 (intel graphics and nvidia optimus) and windows 8 (through
parallels desktop on mac). There are no problem running on them. I do
not
have a system with dedicated graphics card to test on xp and vista to
confirm.

My friend has tried running it on his laptop, intel graphics and
windows
7.
He gets the same thing as running on my xp and vista. It seems to
affect
on
those with intel integrated.

On Tue, Nov 5, 2013 at 4:55 PM, T. Joseph Carter < tjcarter at spiritsubstance.com> wrote:

On Tue, Nov 05, 2013 at 04:20:19PM +0800, temp account wrote:

Hi,

I have a game that uses OpenGL with SDL1 and recently updated to SDL2. The
game starts up with a blank screen when using the SDL renderer. If I use a
GL context, the game runs slow. Is there anything I am doing wrong? Before,
the game was able to run at least 30 fps. In addition I use SDL_mixer and
SDL_ttf and have updated them for SDL2. Here is my code:

if (SDL_Init(SDL_INIT_EVERYTHING) < 0) {
    fprintf(stderr, "Unable to initialize SDL: %s\n", SDL_GetError());
    exit(1);
}
int flags = SDL_WINDOW_OPENGL; // |SDL_HWSURFACE;

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 0);
SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 0);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_SetHint(SDL_HINT_FRAMEBUFFER_ACCELERATION, "1");
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");

#ifdef USE_CONTEXT
if ((gameWindow = SDL_CreateWindow("game", SDL_WINDOWPOS_CENTERED,
                                   SDL_WINDOWPOS_CENTERED, w, h, flags)) == NULL) {
    fprintf(stderr, "Unable to create OpenGL screen: %s\n", SDL_GetError());
    SDL_Quit();
    exit(1);
}
glcontext = SDL_GL_CreateContext(gameWindow);
#else
if (SDL_CreateWindowAndRenderer(w, h, flags, &gameWindow, &renderer)) {
    fprintf(stderr, "Unable to create OpenGL screen: %s\n", SDL_GetError());
    SDL_Quit();
    exit(1);
}
SDL_SetWindowTitle(gameWindow, "game");
#endif

while (running) {
    runLogic();
    drawFrame();
    drawUI();
#ifdef USE_CONTEXT
    SDL_GL_MakeCurrent(gameWindow, glcontext);
    SDL_GL_SwapWindow(gameWindow);
#else
    SDL_RenderPresent(renderer);
#endif
    calculateFPS();
}

SDL_Quit();

Thank you in advance.

For one thing, there's no need to constantly SDL_GL_MakeCurrent. When you
create a context, it's made current automatically. Unless you have more than
one GL context, the one you created should always be current. That shouldn't
bog your code down too much, but it's the only difference I can see based on
what you've shown here.

Secondly, SDL's renderer sets up a projection matrix with glOrtho (and
resets it upon resize of your window). If you're trying to set up a
projection matrix with glFrustum, you'll first want to set the matrix mode
to GL_PROJECTION and load the identity. Then create a frustum matrix. If
you're not doing that, you're likely to wind up with a screen the color of
whatever glClearColor is.

If you're using a variant of gluPerspective to calculate the values for
glFrustum, you can shave a few cycles by not doing the trig over and over
again.

This probably won't stop your slowness problem in and of itself, but it
should get both cases above working and at approximately the same speed.

Joseph


SDL mailing list
SDL at lists.libsdl.org
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

