[SDL1.3 hg] Sharing a SDL_GLContext between two SDL_Windows

Hi.

I’m currently sharing a single SDL_GLContext between two SDL_Windows (one on
each display, the 2nd of which has vsync enabled).

I create a 3rd, invisible window whose dimensions are the larger of the two
visible windows’ dimensions, and then call SDL_GL_CreateContext() on its
windowID.

Each frame, I:

  1. Render the 1st window
  2. Call SDL_GL_SetSwapInterval(false), disabling vsync
  3. Call SDL_GL_SwapWindow()
  4. Call SDL_GL_MakeCurrent() for the 2nd window
  5. Render the 2nd window
  6. Call SDL_GL_SetSwapInterval(true), enabling vsync
  7. Call SDL_GL_SwapWindow()

This seems to work pretty well, but I’m wondering if there’s a better way to
do what I want to do, or if there are any performance implications I should
be aware of…
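
For reference, the loop looks roughly like this in code (just a sketch -
draw_window_1/2() are placeholders for my own GL drawing, and the exact 1.3
signatures have shifted between hg revisions, e.g. window pointers vs.
window IDs):

    #include <SDL.h>

    void render_frame(SDL_Window *win1, SDL_Window *win2, SDL_GLContext ctx)
    {
        /* 1st window: vsync off, so this swap never blocks. */
        SDL_GL_MakeCurrent(win1, ctx);   /* win2 was left current last frame */
        SDL_GL_SetSwapInterval(0);       /* disable vsync */
        draw_window_1();                 /* placeholder */
        SDL_GL_SwapWindow(win1);

        /* 2nd window: vsync on, so this swap paces the whole frame. */
        SDL_GL_MakeCurrent(win2, ctx);
        SDL_GL_SetSwapInterval(1);       /* enable vsync */
        draw_window_2();                 /* placeholder */
        SDL_GL_SwapWindow(win2);
    }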

The reason I’m sharing a glContext is that I’m writing VJ software that plays
back multiple videos simultaneously, so I’d like to update only one set of
glTextures.
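
Concretely, because the context (and therefore its texture objects) is
shared, each decoded video frame only has to be uploaded once per frame -
something like this (sketch; the texture ID, sizes, and pixel pointer are
placeholders):

    #include <SDL.h>
    #include <SDL_opengl.h>

    /* Upload the latest decoded frame into an existing texture.  Both
     * windows' draw passes can then bind the same texture ID, since the
     * shared context owns the texture objects. */
    static void upload_video_frame(GLuint video_tex, int w, int h,
                                   const void *pixels)
    {
        glBindTexture(GL_TEXTURE_2D, video_tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }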

Has anybody else solved this sort of problem in a way they believe to be
more efficient?

You could make one window as big as two or three monitors (the OS should make
this fairly automatic). From there, query the size of each monitor and divide
the window into regions of those sizes (SDL_Rect would come in handy). Next,
create three separate threads and a mutex (to be shared) - each thread locks
the mutex, draws its portion of the window, then unlocks the mutex - this
will probably be less hardware-hungry. You’d have to make sure that the OS
“aligns” each monitor to the right, or something like that, though - it’s
very dependent on how the monitors are set up.
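
Something like this for splitting the window up (rough sketch only - I’m
using SDL_GetNumVideoDisplays()/SDL_GetDisplayBounds() here, which may or may
not exist under those names in the 1.3 snapshot you’re building against):

    #include <SDL.h>

    #define MAX_DISPLAYS 3

    /* One SDL_Rect per attached display, in desktop coordinates.  If the
     * spanning window sits at the desktop origin, these map straight onto
     * regions of the window; each thread would then draw only inside its
     * own rect. */
    static int get_display_rects(SDL_Rect rects[MAX_DISPLAYS])
    {
        int i, n = SDL_GetNumVideoDisplays();
        if (n > MAX_DISPLAYS)
            n = MAX_DISPLAYS;
        for (i = 0; i < n; ++i)
            SDL_GetDisplayBounds(i, &rects[i]);
        return n;
    }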

Does that help at all?
-Alex

PS: Otherwise, your implementation is more independent of the monitor setup,
and probably isn’t that hardware-intensive - especially if you’re using a
nice gfx card/CPU (which most VJs should have, I’d assume).

Thanks for your suggestion!

In fact, I had previously been using one very large window that spanned all
displays (with a single rendering thread).

The issue that made me depart from this method was vsync and framerate: if I
turned on vsync, I couldn’t tell OS X which display I wanted to vsync my
multi-display-spanning window to. This resulted in really strange behavior,
where I was getting 30 fps for one second, then 60 fps for the next, back and
forth like clockwork, for no apparent reason! I couldn’t resolve this, so I
went to one window per display, which allowed me to maintain 60 fps, with
vsync only applied to the 2nd display.
