Some pieces of SDL 1.3 win32 video code that look wrong to me


when tracking down a few problems (including bug #605) I had with current SDL
1.3 (last commit revision 3675), a number of details caught my attention.
However I’m not sure any or all of them are bugs, so I would like to discuss
them here.

SDL_win32opengl.c, WIN_GL_SetupWindow:

 if (_this->gl_config.accelerated >= 0) {
     *iAttr++ = WGL_ACCELERATION_ARB;

Looks like the attribute WGL_ACCELERATION_ARB might be inserted twice, with
contradictory values.
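For illustration, here is a minimal plain-C sketch of how the acceleration attribute pair could be emitted at most once. The WGL_* values match the WGL_ARB_pixel_format spec, but the helper itself is a hypothetical refactoring, not SDL's actual code:

```c
#include <assert.h>

/* Values from the WGL_ARB_pixel_format spec (stand-ins for wglext.h). */
#define WGL_ACCELERATION_ARB      0x2003
#define WGL_NO_ACCELERATION_ARB   0x2025
#define WGL_FULL_ACCELERATION_ARB 0x2027

/* Emit the acceleration attribute pair exactly once, or not at all,
   then terminate the list. Returns the number of ints written. */
static int set_acceleration_attribs(int accelerated, int *iAttr)
{
    int *start = iAttr;

    if (accelerated >= 0) {          /* -1 means "don't care" */
        *iAttr++ = WGL_ACCELERATION_ARB;
        *iAttr++ = accelerated ? WGL_FULL_ACCELERATION_ARB
                               : WGL_NO_ACCELERATION_ARB;
    }
    *iAttr++ = 0;                    /* attribute list terminator */
    return (int)(iAttr - start);
}
```

Writing the pair in one place makes it impossible to end up with two contradictory WGL_ACCELERATION_ARB entries in the list.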

SDL_win32modes.c, WIN_GetDisplayMode:

     GetDIBits(hdc, hbm, 0, 1, NULL, bmi, DIB_RGB_COLORS);
     GetDIBits(hdc, hbm, 0, 1, NULL, bmi, DIB_RGB_COLORS);

The line appears twice (as shown :)). If that’s intended, please add a comment.

SDL_win32window.c, WIN_CreateWindow:

 if (window->flags & SDL_WINDOW_FULLSCREEN) {
     top = HWND_TOPMOST;
 } else {
     top = HWND_NOTOPMOST;
 }

The variable top does not seem to be used afterwards. The same few lines appear
in other functions, so I guess it's a leftover from some copy-and-paste action.

SDL_win32window.c, SetupWindowData:

 /* Set up the window proc function */
 data->wndproc = (WNDPROC) GetWindowLongPtr(hwnd, GWLP_WNDPROC);
 if (data->wndproc == NULL) {
     data->wndproc = DefWindowProc;
 } else {
     SetWindowLongPtr(hwnd, GWLP_WNDPROC, (LONG_PTR) WIN_WindowProc);
 }

So, WIN_WindowProc will replace any window procedure, but only if the window
really had one. Otherwise, the window will continue to have no window procedure.
I guess that WIN_WindowProc should always be set, no matter what the previous
WNDPROC was. The test for NULL still makes sense, because WIN_WindowProc will
occasionally call data->wndproc.
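In other words, the subclassing could look like the sketch below. The Win32 types and procs are replaced by trivial stand-ins, since only the control flow matters here; this is my reading of the fix, not code taken from SDL:

```c
#include <assert.h>
#include <stddef.h>

/* Trivial stand-ins for the Win32 pieces involved (hypothetical stubs). */
typedef int (*WNDPROC)(void);
static int DefWindowProc_stub(void)  { return 0; }
static int WIN_WindowProc_stub(void) { return 1; }

/* Remember the previous proc (DefWindowProc if there was none) and
   install WIN_WindowProc unconditionally. The return value is what
   would be handed to SetWindowLongPtr(hwnd, GWLP_WNDPROC, ...). */
static WNDPROC subclass_window(WNDPROC current, WNDPROC *saved)
{
    *saved = (current != NULL) ? current : DefWindowProc_stub;
    return WIN_WindowProc_stub;
}
```

That way every SDL-managed window gets WIN_WindowProc, and data->wndproc is always a valid proc to chain to.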

SDL_win32events.c, SDL_RegisterApp:

 if (!name && !SDL_Appname) {
     name = "SDL_app";
     SDL_Instance = hInst ? hInst : GetModuleHandle(NULL);
 }

 if (name) {
     SDL_Appname = WIN_UTF8ToString(name);
     SDL_Appstyle = style;
     SDL_Instance = hInst ? hInst : GetModuleHandle(NULL);
 }

Assigning name inside the first if-block causes the second if-block to be
entered as well. Hence the assignments to SDL_Appstyle/SDL_Instance in the first
block are useless, because the second block inevitably repeats or replaces them.
That is, the code has the same result as:
 if (!name && !SDL_Appname) {
     name = "SDL_app";
 }

 if (name) {
     SDL_Appname = WIN_UTF8ToString(name);
     SDL_Appstyle = style;
     SDL_Instance = hInst ? hInst : GetModuleHandle(NULL);
 }

But I don’t think that was intended.
I suggest:
 if (name) {
     SDL_Appname = WIN_UTF8ToString(name);
 } else if (!SDL_Appname) {
     SDL_Appname = WIN_UTF8ToString("SDL_app");
 }

 if (style) {
     SDL_Appstyle = style;
 } else if (!SDL_Appstyle) {

 if (hInst) {
     SDL_Instance = hInst;
 } else if (!SDL_Instance) {
     SDL_Instance = GetModuleHandle(NULL);
 }
SDL_win32opengl.h, struct SDL_GLDriverData:

SDL caches some GL function addresses in the SDL_GLDriverData (part of the video
driver object). The driver seems to be a global object, so there is only one set
of addresses being cached.

I’m not sure if it matters, but according to MSDN, wglGetProcAddress may return
different addresses for equal function names in different GL contexts (more
specifically, contexts with different pixel formats).
What if an SDL program tries to render to windows with different pixel formats?
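One conceivable remedy would be to key the cached addresses by pixel format instead of keeping a single global set. A toy sketch of such a lookup follows; it is entirely hypothetical (SDL's real cache is the one per-driver SDL_GLDriverData), with a single cached entry for illustration:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_FORMATS 8

/* One cache slot per pixel format. */
struct gl_fn_cache {
    int   pixel_format;          /* key: the window's pixel format */
    void *glDrawRangeElements;   /* cached address (would come from
                                    wglGetProcAddress) */
};

static struct gl_fn_cache slots[MAX_FORMATS];
static int slot_count = 0;

/* Return the cache for a pixel format, creating it on first use. */
static struct gl_fn_cache *cache_for_format(int pixel_format)
{
    int i;

    for (i = 0; i < slot_count; ++i) {
        if (slots[i].pixel_format == pixel_format)
            return &slots[i];
    }
    if (slot_count == MAX_FORMATS)
        return NULL;             /* out of slots */
    slots[slot_count].pixel_format = pixel_format;
    slots[slot_count].glDrawRangeElements = NULL;
    return &slots[slot_count++];
}
```

With something like this, two windows with different pixel formats would each resolve their own function addresses rather than sharing one possibly-wrong set.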

By accident, I have already proven that it is possible to activate different
OpenGL implementations by choosing different pixel formats; see the next case.

SDL_win32opengl.c, WIN_GL_ChoosePixelFormatARB:

 int pixel_format = 0;

 SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

 _this->gl_data->wglChoosePixelFormatARB(hdc, iAttribs, fAttribs,
                                         1, &pixel_format, &matching);

return pixel_format;

My program told me that glDrawRangeElements was unsupported. This is what I
found to be the reason:

I had set SDL_GL_DEPTH_SIZE to 32, but the hardware ICD only supports 24 depth
bits (which kind of surprised me, but it still seems sufficient).
ChoosePixelFormat picked a format supported by the hardware ICD, reducing the
depth size as required.

wglChoosePixelFormatARB was less tolerant and found no match (matching==0,
pixel_format unchanged). So WIN_GL_ChoosePixelFormatARB returned 0.

SDL then fell back to looking through all formats on its own
(WIN_GL_ChoosePixelFormat). The result was a perfect match for all the
attributes, but one supported only by MS' "GDI Generic" software OpenGL
implementation, which does not provide glDrawRangeElements, quite apart from
not using my hardware.

I solved the problem by adjusting SDL_GL_DEPTH_SIZE to 24. But I wonder what
will happen if some hardware only supports 16 bits. How could one tell SDL to
use a smaller depth buffer rather than falling back to software OpenGL? (And can
SDL find out whether a pixel format is supported by the hardware ICD?)
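One possible approach would be to retry with progressively smaller depth sizes before giving up. Sketched below with a stub standing in for wglChoosePixelFormatARB; everything here is hypothetical, and the stub simply pretends the ICD supports at most 24 depth bits:

```c
#include <assert.h>

/* Stub chooser: pretends the hardware ICD supports at most 24 depth
   bits and returns a fake format id, or 0 for "no match" (standing in
   for a wglChoosePixelFormatARB-based query). */
static int choose_hw_format(int depth_bits)
{
    return (depth_bits <= 24) ? 100 + depth_bits : 0;
}

/* Retry with smaller depth buffers instead of falling straight back
   to a software pixel format. */
static int choose_with_depth_fallback(int requested_depth, int *got_depth)
{
    static const int candidates[] = { 32, 24, 16 };
    int i;

    for (i = 0; i < 3; ++i) {
        int fmt;
        if (candidates[i] > requested_depth)
            continue;                 /* never exceed the request */
        fmt = choose_hw_format(candidates[i]);
        if (fmt != 0) {
            *got_depth = candidates[i];
            return fmt;
        }
    }
    return 0;                         /* caller may still fall back */
}
```

A request for 32 depth bits would then quietly settle for 24 on my hardware instead of landing in the GDI Generic implementation.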

During my “investigation”, I tried this little change in the code above:
SetPixelFormat(hdc, pixel_format = ChoosePixelFormat(hdc, &pfd), &pfd);

This way, ChoosePixelFormat’s “hardware-oriented” choice is accepted if
wglChoosePixelFormatARB does not find a (better?) match. What do you think about it?