I have a piece of Windows and Linux code which “hands off” OpenGL
rendering from one thread to another, not in a performance-critical area.
On the Linux side, I had previously coded this directly using glX calls
and got it working – the old thread would call glXMakeCurrent() with a
NULL context, and the new thread would create and make current a new
context. Fine.
I switched this code over to using SDL, which has been great overall and
has allowed me to take a lot of nasty #ifdefs out of the code. But! SDL
wisely chooses not to expose the OpenGL contexts to user code
manipulation. The calls to make the context current are within
SetVideoMode(). I changed my code over so that during the switch the old
thread does nothing, and the new thread calls SetVideoMode. This works on
Windows. On Linux, I segfault somewhere in X11_GL_MakeCurrent() – I’m
not too swift with debugging on Linux, so I haven’t narrowed it to the
offending line yet.
Anyway, I’m stuck here. I can see three solutions to my problem, and I’d
love help achieving any one of them:
- Safely expose the OpenGL context switching calls directly so I don’t
have to “go big” and call SetVideoMode() redundantly.
- Find some magical combination of parameters or states that will get the
Linux implementation safely through the SetVideoMode() call.
- Do the rendering hand-off between the two threads using some other
mechanism which plays nice with SDL.
Lastly, I should state that moving the rendering from one thread to
another is integral to the program – I’m well aware that real programs
should confine their rendering to one thread.
Thanks for any help you can give. I’m not a regular reader of the
newsgroup, so please direct any replies also to my address.
Brad Werth
@Bradley_J_Werth