NVidia OpenGL performance problems

Dear SDL Users,

I have written an OpenGL application in SDL 2.0 and the performance is
decent (+/- 50fps) on my development machine (Windows 7, with a Radeon
graphics card and Catalyst drivers).

The problem is that the performance is almost non-existent (1/3 fps) on all
of the Windows machines with NVidia graphics cards that I’ve tested it on.
It is almost as if the application is being rendered entirely in software.
I’ve tried it on 3 separate machines, all with the same results.

I’ve played around with the SDL_GL_SetAttribute() function, like trying
to set SDL_GL_CONTEXT_PROFILE_MASK to
SDL_GL_CONTEXT_PROFILE_COMPATIBILITY and setting the
SDL_GL_CONTEXT_MAJOR_VERSION and SDL_GL_CONTEXT_MINOR_VERSION flags to
a variety of values, and calling
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1); but none of them seems
to make any difference.
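
In other words, calls along these lines (the exact profile/version combinations varied between attempts; the 2 and 1 below are just one example):

SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);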

On the NVidia machines, glGetString(GL_VERSION) reports OpenGL version
1.1.0 (whereas on my Radeon box it reports “4.1.10600 Compatibility Profile
Context”)

Perhaps I should mention that it still uses some old OpenGL features like
glBegin(); ...; glEnd(); - It turns out my OpenGL knowledge is severely
outdated. The problem is that my deadline is approaching and since this
application is more of a demo, I don’t have the time or inclination to
rewrite it using modern OpenGL. I also didn’t think it would be a problem,
because the OpenGL implementations should be backwards compatible.
Shouldn’t they?

So, my questions:

  • Has anyone else experienced problems like this?
  • Is there an explanation for the problem? How can I resolve it?

On the surface it looks like a driver issue, but the machines I’ve been
testing on are used for gaming and CAD, and those applications seem fine. GL
Extensions Viewer (http://www.realtech-vr.com/glview/download.php) on the
one machine I’ve tested it on reports:
Renderer: GeForce GTX 550 Ti/PCIe/SSE2
Vendor: NVIDIA Corporation
Memory: 0 MB
Version: 4.4.0
Shading language version: 4.40 NVIDIA via Cg compiler

So it would seem to rule out an outdated driver.

It seems that my problem is related to these issues:
http://stackoverflow.com/q/15183930/115589 and
http://gamedev.stackexchange.com/q/11533, but none of those questions seem
to have a definitive answer.

Thank you,
Werner Stoop

That would indicate that it’s using Microsoft’s software renderer rather than your nvidia GPU. What does glGetString(GL_RENDERER) return when you run your program?

On Jul 29, 2014, at 1:51 PM, Werner Stoop wrote:

On the NVidia machines, glGetString(GL_VERSION) reports OpenGL version 1.1.0 (whereas on my Radeon box it reports “4.1.10600 Compatibility Profile Context”)
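
A minimal check along those lines, assuming the window and GL context have already been created with SDL_CreateWindow() and SDL_GL_CreateContext() (the helper name below is only illustrative):

#include <stdio.h>
#include <SDL.h>
#include <SDL_opengl.h>

/* Print the strings identifying the active OpenGL implementation. A renderer
 * of "GDI Generic" with version 1.1.0 means Windows' built-in software
 * implementation is in use instead of the vendor's driver. */
static void report_gl_strings(FILE *out)
{
    fprintf(out, "GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    fprintf(out, "GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    fprintf(out, "GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}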


What context version do you get with none of these options set? Are you
loading your GL functions dynamically (using SDL_GL_GetProcAddress) or do
you link against a DLL?

Gabriel.

Hi Gabriel,

Without setting any of those options, glGetString(GL_VERSION) reports
OpenGL version 1.1.0 on the NVidia machines.

I don’t use SDL_GL_GetProcAddress, and my linker flags include sdl2-config --libs -lopengl32 -lglu32

I’m using MinGW to compile under Windows. sdl2-config --libs evaluates to
-L/usr/local/lib -lmingw32 -lSDL2main -lSDL2 -mwindows

Thanks,
Werner

(Sorry for replying from the mailing list digest - I haven’t received your
original response yet. Could be related to when I joined the mailing list.)

I’m not 100% certain about what’s going on, but I would try a test: load
glGetString with SDL_GL_GetProcAddress, without linking to opengl32, and
see what you get from that.

Other than that…is the nVidia card the only card on the computer?

Gabriel.
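
A sketch of that dynamic-loading test, under the assumption that a GL context already exists; the typedef and helper names here are illustrative, not SDL API:

#include <stdio.h>
#include <SDL.h>
#include <SDL_opengl.h>

/* Resolve glGetString through SDL rather than the opengl32 import library,
 * so the call goes through whatever GL library SDL loaded for this context. */
typedef const GLubyte * (APIENTRY *GetStringFn)(GLenum name);

static void report_renderer_dynamic(void)
{
    GetStringFn get_string = (GetStringFn)SDL_GL_GetProcAddress("glGetString");
    if(get_string)
        printf("renderer: %s\n", (const char *)get_string(GL_RENDERER));
    else
        printf("could not resolve glGetString: %s\n", SDL_GetError());
}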

Hi Gabriel,

Yes, the nVidia card is the only one in the machine.

I’ll try to do what you propose, but since the nvidia machine is not my
own, it will take a while to get back to you on that.

As I’ve told Alex in the other branch of this thread it seems that the
application is indeed using Microsoft’s software OpenGL renderer, but I’m
stumped as to how I convince it to use the GPU instead.

Thanks,
Werner

Can you please post the complete list of SDL_GL_Attributes you are setting?

One possibility is that you are requesting a pixel format that is not
hardware accelerated on that specific GPU / drivers.


Hi Stefanos,

Here is the first part of my main() function that initialises OpenGL (I’ve
snipped all the error checking stuff). The only SDL_GL_SetAttribute() calls
I make are the ones for SDL_GL_DOUBLEBUFFER and SDL_GL_DEPTH_SIZE. Do you
think that may be the cause?

I’ve played around with other parameters (SDL_GL_CONTEXT_PROFILE_MASK,
SDL_GL_CONTEXT_MAJOR_VERSION, SDL_GL_CONTEXT_MINOR_VERSION and
SDL_GL_ACCELERATED_VISUAL), but I’ve removed them since none of them seemed
to make a difference.
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_AUDIO) < 0)
    sdldie("Unable to initialize SDL");

SDL_Window *screen;
if(fullscreen) {
    screen = SDL_CreateWindow("Gooey",
        SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED,
        screen_width, screen_height,
        SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN | SDL_WINDOW_FULLSCREEN_DESKTOP);
} else {
    screen = SDL_CreateWindow("Gooey",
        SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED,
        screen_width, screen_height,
        SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE);
}

SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 32);

SDL_GLContext maincontext = SDL_GL_CreateContext(screen);

SDL_GL_SetSwapInterval(1);

glShadeModel(GL_SMOOTH);

glCullFace(GL_BACK);
glFrontFace(GL_CCW);

glEnable(GL_TEXTURE_2D);
glClearColor(0.5, 0.5, 0.5, 0);

float ratio = (float)width/(float)height;
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, ratio, 0.01, 1024.0);

Thank you,
Werner

You might want to try asking for a 24-bit depth buffer. Most desktop hardware will prefer that, I think.

Looking at Apple’s capability chart[1] (which can sometimes give an idea of what specific hardware supports), it seems like Intel and nvidia GPUs don’t have native 32-bit depth buffer modes.

[1]: https://developer.apple.com/graphicsimaging/opengl/capabilities/

On Jul 30, 2014, at 2:28 PM, Werner Stoop wrote:

Here is the first part of my main() function that initialises OpenGL (I’ve snipped all the error checking stuff). The only SDL_GL_SetAttribute() calls I make are the ones for SDL_GL_DOUBLEBUFFER and SDL_GL_DEPTH_SIZE. Do you think that may be the cause?
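
After switching the request to SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24), SDL can also report what the driver actually granted. A small sketch, to be called once the context exists (the helper name is illustrative):

#include <stdio.h>
#include <SDL.h>

/* Query the depth buffer size of the current GL context. A mismatch with the
 * requested size suggests the pixel format request could not be satisfied. */
static void report_depth_size(FILE *out)
{
    int depth = 0;
    if(SDL_GL_GetAttribute(SDL_GL_DEPTH_SIZE, &depth) == 0)
        fprintf(out, "info: depth buffer size: %d bits\n", depth);
    else
        fprintf(out, "error: %s\n", SDL_GetError());
}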



Hi Alex,

Wow, thanks. I’ve recompiled the application, but I’ll have to wait for my
friend with his NVidia card to test it for me.

Regards,
Werner


Hi Alex,

My friend tested the application for me on his NVidia machine.
Unfortunately I still get the “GDI Generic” renderer.

I’ve distilled the source code of my application to just the bare
essentials that creates a SDL screen and then an OpenGL context. I’ve set
the GL_DEPTH_SIZE to 24.

I’ve pasted the code here: http://pastebin.com/PsQiZwQT

The output of this program is:

info: SDL_GL_DEPTH_SIZE: 24
info: SDL version 2.0.1 (compile)
info: SDL version 2.0.1 (link)
info: OpenGL version: 1.1.0
info: OpenGL renderer: GDI Generic
error: SDL: nvgltest.c:83: That operation is not supported
info: setup view: 640 x 480

whereas on my Radeon machine it gets set correctly.

Regards,
Werner


Someone suggested using SDL_GL_GetProcAddress to retrieve the glGetString
function - did you try that?


glGetString() is core OpenGL 1.1, so there’s no need to do anything special.

I use a depth buffer size of 16 bits, but I wouldn’t expect that to be the
problem. It seems much more like a driver issue. I would double-check
that the drivers are up to date. Does GL Extensions Viewer or some other
tool give you a mode list, like glxinfo does? Maybe try explicitly
specifying the color buffer depth?

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);

Jonny D


FWIW: http://www.opengl.org/wiki/FAQ

Why is my GL version only 1.4 or lower?

There are three reasons you may get an unexpectedly low OpenGL version.

On Windows, you might get a low GL version if, during context creation
(http://www.opengl.org/wiki/Creating_an_OpenGL_Context), you use an
unaccelerated pixel format. This means you get the default implementation
of OpenGL, which is version 1.1.

The solution to this is to be more careful in your pixel format selection.
More information can be found at Platform specifics: Windows
(http://www.opengl.org/wiki/Platform_specifics:_Windows) and other parts of
the Wiki.

The other reason is that the makers of your video card (and therefore the
makers of your video drivers) do not provide an up-to-date OpenGL
implementation. There are a number of defunct graphics card vendors out
there. However, of the non-defunct ones, this is most likely to happen with
Intel’s integrated GPUs.

Intel does not provide a proper, up-to-date OpenGL implementation for their
integrated GPUs. There is nothing that can be done about this. NVIDIA and
ATI provide good support for their integrated GPUs.

Another reason is that you haven’t installed your video card drivers after
installing your OS.

Be sure to query OpenGL with glGetString
(http://www.opengl.org/wiki/GLAPI/glGetString) and make sure the returned
values make sense.
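
One rough way to detect that fallback from inside an SDL program is to inspect the strings the context reports; the vendor/renderer values checked below are the ones Microsoft’s default implementation is known to report, but treat this as a heuristic rather than a guarantee:

#include <string.h>
#include <SDL.h>
#include <SDL_opengl.h>

/* Call after SDL_GL_CreateContext(). Microsoft's default implementation
 * identifies itself as renderer "GDI Generic" (vendor "Microsoft Corporation"). */
static int using_software_fallback(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    if(!renderer || !vendor)
        return 0; /* no current GL context, nothing to report */
    return strstr(renderer, "GDI Generic") != NULL ||
           strstr(vendor, "Microsoft") != NULL;
}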

Gabriel.

Oh, also… Try setting the GL attributes before you create the window.

Jonny D
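
The reason the order matters, as far as I understand SDL on Windows, is that the pixel format is chosen when the OpenGL-capable window is created, so attributes set afterwards cannot affect it. A minimal sketch of the working order (the title and size are placeholders):

#include <SDL.h>

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;
    if(SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* 1. Request the pixel format attributes... */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);

    /* 2. ...then create the window (this is where the format gets locked in)... */
    SDL_Window *win = SDL_CreateWindow("GL test",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);

    /* 3. ...and finally the context. */
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    /* rendering would go here */

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}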


Hi all,

I’ve modified the code I shared earlier to set the GL attributes before I
create the window as per Jonny’s suggestion, and now it reports that it is
using the GeForce GTX as renderer. I’m modifying my main application to do
the same, and will ask my friend to test it again with his GeForce.

Now my next question is whether this should perhaps be logged as an issue:
that when SDL is initialised, or when SDL_CreateWindow() is called with
SDL_WINDOW_OPENGL, it should choose default parameters that prefer the
NVidia driver over the GDI renderer. I still can’t explain why it used my
Radeon renderer by default, but not the NVidia one.

Anyway, the code I posted earlier now looks like the following:

if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_TIMER | SDL_INIT_AUDIO) < 0) {
    fprintf(logfile, "error: Unable to initialize SDL: %s", SDL_GetError());
    SDL_Quit();
    exit(1);
}

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 16);

SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);

screen = SDL_CreateWindow("SDL Nvidia GL Test",
    SDL_WINDOWPOS_CENTERED,
    SDL_WINDOWPOS_CENTERED,
    screen_width, screen_height,
    SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE);
if(!screen) {
    fprintf(logfile, "error: Unable to create main window: %s\n", SDL_GetError());
    SDL_Quit();
    exit(1);
}

Thank you everyone,
Werner
