OpenGL and Multithreading

Hi.

It does not seem that I can create a GL window in one thread and draw to it using
GL functions in another. Is this true, and why? Memory is shared between
threads, so the GL functions should have access to the GL state?

Cya, Popi.

This is just a shot in the dark…
But my guess is that if you use SDL to create the window that GL draws in,
then that SDL_Surface* is NOT known by the thread, and thus… ehh… But
wait, normal GL calls should still work… I think… Hmmm, I can’t seem to
decide myself now :wink:

Just ignore this post… I just shot myself in the foot…

Best regards
Daniel Liljeberg

----- Original Message -----

From: pontus.pihlgren.5501@student.uu.se (Pontus Pihlgren)
To:
Sent: Saturday, October 04, 2003 4:06 PM
Subject: [SDL] OpenGL and Multithreading

Hi.

It does not seem that I can create a GL window in one thread and draw to it
using GL functions in another. Is this true, and why? Memory is shared between
threads, so the GL functions should have access to the GL state?

Cya, Popi.



Windows requires that the OpenGL context be selected into the thread
before rendering, and a context can only be selected into one thread at
a time.

I don’t know about other archs.

On Sat, Oct 04, 2003 at 04:06:57PM +0200, Pontus Pihlgren wrote:

It does not seem that I can create a GL window in one thread and draw to it using
GL functions in another. Is this true, and why? Memory is shared between
threads, so the GL functions should have access to the GL state?


Glenn Maynard
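
A minimal sketch of the Windows behaviour Glenn describes: the thread that created the context has to release it before another thread can make it current and issue GL calls. This assumes SDL 1.2 has already created the window and context with SDL_SetVideoMode(…, SDL_OPENGL); the helper names are invented for illustration, only the WGL calls themselves are real.

    /* Sketch: hand the SDL-created GL context from the main thread to a
       dedicated rendering thread on Windows (WGL).  Helper names invented. */
    #include <windows.h>
    #include <GL/gl.h>

    static HDC   g_hdc;    /* device context of the SDL window */
    static HGLRC g_hglrc;  /* the OpenGL rendering context     */

    /* Main thread, after SDL_SetVideoMode(w, h, bpp, SDL_OPENGL). */
    void release_context_in_main_thread(void)
    {
        g_hdc   = wglGetCurrentDC();
        g_hglrc = wglGetCurrentContext();
        wglMakeCurrent(NULL, NULL);       /* un-bind from this thread */
    }

    /* Rendering thread, before the first gl* call. */
    void acquire_context_in_render_thread(void)
    {
        if (!wglMakeCurrent(g_hdc, g_hglrc)) {
            /* fails if the context is still current in another thread */
        }
    }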

It is OpenGL that is the restricting factor: the single library can
handle multiple threads, but every thread has to use its own OpenGL
graphics context (GC). I.e. the GCs are not thread safe.

And if you put a little bit more thought into it you understand why…
oh, but of course those rendering calls couldn’t have any guaranteed
order of execution, because there is no guaranteed order of execution
in the threads.

I think one could hack something, because it is of course possible to
restrict the usage of the GC to only one thread at a time (mutex).
In that case there really is an alternative route, which uses only a
single rendering thread, and it is most likely also faster.

On Saturday 04 October 2003 21:09, Glenn Maynard wrote:

On Sat, Oct 04, 2003 at 04:06:57PM +0200, Pontus Pihlgren wrote:

It does not seem that I can create a GL window in one thread and draw
to it using GL functions in another. Is this true, and why? Memory
is shared between threads, so the GL functions should have access
to the GL state?

Windows requires that the OpenGL context be selected into the thread
before rendering, and a context can only be selected into one thread
at a time.

I don’t know about other archs.

And if you put a little bit more thought into it you understand why…
oh, but of course those rendering calls couldn’t have any guaranteed
order of execution, because there is no guaranteed order of execution
in the threads.

That isn’t the problem.

If my code guarantees serialization (don’t call OpenGL from two threads
at once on the same context), OpenGL doesn’t have to.

The issue is that it goes beyond that: if you call OpenGL from a
different thread, it simply won’t work; and if you try to select a
context into two threads, it’ll refuse.

This is incredibly annoying. I decode a movie in a thread. I then send
it to a texture. However, I can’t simply lock a mutex and upload it; I
have to stick it in a buffer, set a flag and then have the main
rendering thread upload it when it notices the flag. Regardless of the
fact that I could serialize access myself, it won’t let me do it.

I think one could hack something, because it is of course possible to
restrict the usage of the GC to only one thread at a time (mutex).

Nope, this won’t work (at least not on my W2k, Geforce 2 system).

On Sun, Oct 05, 2003 at 02:19:14AM +0300, Sami Näätänen wrote:


Glenn Maynard
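
The buffer-and-flag workaround Glenn describes (decode into a buffer, set a flag, let the thread that owns the context upload it) could look roughly like the sketch below, using SDL 1.2's mutex API. The sizes, variable names and texture setup are assumptions made for illustration, not code from the thread.

    /* Sketch: decode thread fills a buffer and raises a flag; the GL-owning
       thread uploads it.  Buffer size and names are illustrative only. */
    #include <SDL.h>
    #include <SDL_opengl.h>
    #include <string.h>

    #define W 512
    #define H 512

    static SDL_mutex *lock;           /* created once with SDL_CreateMutex() */
    static unsigned char frame_buf[W * H * 3];
    static int frame_ready = 0;
    static GLuint tex;                /* texture allocated in the GL thread */

    /* Decode thread: no GL calls here, just copy and flag. */
    void frame_decoded(const unsigned char *rgb)
    {
        SDL_LockMutex(lock);
        memcpy(frame_buf, rgb, sizeof frame_buf);
        frame_ready = 1;
        SDL_UnlockMutex(lock);
    }

    /* Rendering thread: called once per frame, owns the GL context. */
    void upload_if_ready(void)
    {
        SDL_LockMutex(lock);
        if (frame_ready) {
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                            GL_RGB, GL_UNSIGNED_BYTE, frame_buf);
            frame_ready = 0;
        }
        SDL_UnlockMutex(lock);
    }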

Yeah, most OpenGL libs deny this to the user, so that threads can’t
mess with each other.

Your problem has an alternative, and I would say more effective, route:
don’t bind the decoded stuff to a texture in the decode thread.

You should set up 2 to n decode buffers and control the use of the buffers
with a mutex.
When the decode thread has finished decoding one frame it will grab the
mutex, and when granted it will add the decoded buffer to the end of the
ready-buffer list and release the mutex.

The rendering thread will take decoded buffers from the ready list by
grabbing the mutex, getting the first (i.e. oldest) frame from the list,
and then freeing the mutex.

This will ensure that each thread spends as little time as possible holding
the mutex, thus allowing the other thread maximally non-blocking operation.

The number of buffers depends on the amount of time used by the decoder
and of course the needed disk activity. But you can experiment with that.

This is a classical client-server (producer-consumer) model, and reading some
papers or good books about parallel programming can give enlightening ways to
do this effectively. We used Gregory R. Andrews’ “Foundations of Multithreaded,
Parallel, and Distributed Programming” in our uni course, and it is good and
clearly explains everything. It doesn’t have C examples, but uses the normal
notation of parallel computing.

So when you start the program you can first set the decode thread to lock
all the mutexes and start to fill the buffers.

On Sunday 05 October 2003 03:02, Glenn Maynard wrote:

On Sun, Oct 05, 2003 at 02:19:14AM +0300, Sami Näätänen wrote:

And if you put a little bit more thought into it you understand why…
oh, but of course those rendering calls couldn’t have any guaranteed
order of execution, because there is no guaranteed order of execution
in the threads.

That isn’t the problem.

If my code guarantees serialization (don’t call OpenGL from two
threads at once on the same context), OpenGL doesn’t have to.

The issue is that it goes beyond that: if you call OpenGL from a
different thread, it simply won’t work; and if you try to select a
context into two threads, it’ll refuse.

This is incredibly annoying. I decode a movie in a thread. I then
send it to a texture. However, I can’t simply lock a mutex and
upload it; I have to stick it in a buffer, set a flag and then have
the main rendering thread upload it when it notices the flag.
Regardless of the fact that I could serialize access myself, it won’t
let me do it.

I think one could hack something, because it is of course possible to
restrict the usage of the GC to only one thread at a time (mutex).

Nope, this won’t work (at least not on my W2k, Geforce 2 system).
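
A minimal sketch of the mutex-protected ready list Sami proposes above: the decoder pushes finished buffers, the renderer pops the oldest, and only the index bookkeeping happens while the mutex is held. The fixed ring of four buffers and all names are assumptions made for illustration, again using SDL 1.2's mutex API.

    /* Sketch: N frame buffers and a "ready" FIFO of buffer indices,
       protected by one mutex.  Sizes and names are illustrative only. */
    #include <SDL.h>

    #define NBUF 4

    static unsigned char buffers[NBUF][512 * 512 * 3]; /* decode targets  */
    static int ready[NBUF];                            /* FIFO of indices */
    static int head = 0, count = 0;                    /* FIFO state      */
    static SDL_mutex *qlock;        /* created once with SDL_CreateMutex() */

    /* Decode thread: mark buffers[i] as ready (mutex held only briefly). */
    void push_ready(int i)
    {
        SDL_LockMutex(qlock);
        ready[(head + count) % NBUF] = i;
        count++;
        SDL_UnlockMutex(qlock);
    }

    /* Render thread: fetch the oldest ready buffer index, or -1 if none;
       the caller then uploads buffers[i] using the GL context it owns. */
    int pop_ready(void)
    {
        int i = -1;
        SDL_LockMutex(qlock);
        if (count > 0) {
            i = ready[head];
            head = (head + 1) % NBUF;
            count--;
        }
        SDL_UnlockMutex(qlock);
        return i;
    }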

Yeah, most OpenGL libs deny this to the user, so that threads can’t
mess with each other.

It’s not the OpenGL implementation’s job to forbid me from handling
serialization myself.

Your problem has an alternative, and I would say more effective, route:
don’t bind the decoded stuff to a texture in the decode thread.

I don’t have a problem; I have fully working code, I just had to spend
more code on it than I should have due to broken Windows (and possibly
others) behavior.

At one point, I even had to cache the results of glGetIntegerv(GL_MAX_TEXTURE_SIZE)
since it won’t let me call it in another thread. That’s ridiculous.

On Sun, Oct 05, 2003 at 04:17:41AM +0300, Sami Näätänen wrote:


Glenn Maynard
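
The caching Glenn mentions boils down to something like this hypothetical pair of helpers: query the limit once in the thread that owns the context, and let other threads read the plain cached integer. The names are invented.

    /* Sketch: cache GL_MAX_TEXTURE_SIZE in the GL thread, since the query
       fails in a thread where no context is current.  Names are invented. */
    #include <SDL_opengl.h>

    static GLint g_max_texture_size = 0;

    void cache_gl_limits(void)          /* call from the GL thread */
    {
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &g_max_texture_size);
    }

    GLint max_texture_size(void)        /* may be read from any thread */
    {
        return g_max_texture_size;
    }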

Yeah, most OpenGL libs deny this to the user, so that threads
can’t mess with each other.

It’s not the OpenGL implementation’s job to forbid me from handling
serialization myself.

Well, it has to keep a record of the OGL state, right? So it is done in the
easiest and most secure way, by doing it through PIDs. It has this
unfortunate side effect, so deal with it. I think that you can make it
even better and faster by doing it the way I said in the earlier post.

Your problem has an alternative, and I would say more effective,
route: don’t bind the decoded stuff to a texture in the decode thread.

I don’t have a problem; I have fully working code, I just had to
spend more code on it than I should have due to broken Windows (and
possibly others) behavior.

It is not broken, it is designed to work like this, because the
alternative would be to add an extra parameter to EVERY call in the
library to identify the scope where the library should operate. That is
not reasonable, don’t you think?

In my opinion your design is wrong in another respect too. You
know that OpenGL calls return almost immediately; they simply add
stuff to the render pipeline, and you have no way of telling at which
position in the pipeline the rendering is going on. So if you try to
change the texture while it is in use, it would lead to unexpected and
wrong results, don’t you think?

At one point, I even had to cache the results of
glGetIntegerv(GL_MAX_TEXTURE_SIZE) since it won’t let me call it in
another thread. That’s ridiculous.

Well, you only think of your threads that can use this library, but
there might be others, and the authors of that lib have to deal with that.

On Sunday 05 October 2003 05:10, Glenn Maynard wrote:

On Sun, Oct 05, 2003 at 04:17:41AM +0300, Sami Näätänen wrote:

Well, it has to keep a record of the OGL state, right? So it is done in the
easiest and most secure way, by doing it through PIDs. It has this
unfortunate side effect, so deal with it. I think that you can make it
even better and faster by doing it the way I said in the earlier post.

Um. Threads share memory. (PIDs? We’re talking about Windows.)

It is not broken, it is designed to work like this, because the
alternative would be to add an extra parameter to EVERY call in the
library to identify the scope where the library should operate. That is
not reasonable, don’t you think?

What are you talking about?

The solution is to allow the same context to be selected into more than
one thread.

(Not that Microsoft is listening.)

In my opinion your design is wrong in another respect too. You
know that OpenGL calls return almost immediately; they simply add
stuff to the render pipeline, and you have no way of telling at which
position in the pipeline the rendering is going on. So if you try to
change the texture while it is in use, it would lead to unexpected and
wrong results, don’t you think?

Um, it doesn’t matter which thread I’m calling a function from. If I
update a texture, the renderer flushes rendering. Threads don’t change
that at all; if that texture update happens in another thread, then it
should still be flushed.

At one point, I even had to cache the results of
glGetIntegerv(GL_MAX_TEXTURE_SIZE) since it won’t let me call it in
another thread. That’s ridiculous.

Well, you only think of your threads that can use this library, but
there might be others, and the authors of that lib have to deal with that.

Sorry, that really didn’t make any sense.

Anyhow, this is pointless and off-topic. Let’s drop this.

On Sun, Oct 05, 2003 at 08:37:04PM +0300, Sami Näätänen wrote:


Glenn Maynard

What are you talking about?

The solution is to allow the same context to be selected into more than
one thread.

(Not that Microsoft is listening.)

This isn’t Microsoft-specific. GLX also requires that a context is
bound to one and only one thread.

I would note that this requirement enables drivers to be
non-threadsafe, therefore lockless, for performance.

I would also note that you can share display lists across contexts in
both wgl and glx.

–
Petri Latvala
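
To illustrate the sharing Petri mentions, on the WGL side a second context created on the same window can be told to share display lists (and, in practice, texture objects) with the first via wglShareLists; under GLX the share context is simply the third argument of glXCreateContext. The fragment below is only a sketch and assumes the window's HDC and the main context already exist.

    /* Sketch: create a second WGL context that shares display lists with
       an existing one, to be made current in a worker thread later. */
    #include <windows.h>
    #include <GL/gl.h>

    HGLRC create_shared_context(HDC hdc, HGLRC main_ctx)
    {
        HGLRC worker_ctx = wglCreateContext(hdc);
        if (worker_ctx == NULL)
            return NULL;

        /* Share before worker_ctx has created any objects of its own. */
        if (!wglShareLists(main_ctx, worker_ctx)) {
            wglDeleteContext(worker_ctx);
            return NULL;
        }
        return worker_ctx;  /* select with wglMakeCurrent() in the worker */
    }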