OSX and pBuffers?

hiya,

…I’ve been trying to port some demos to SDL that use pbuffers…I
know, SDL 1.2 doesn’t include a pbuffer API, but I figured I could work
around that; unfortunately I haven’t been able to on OSX…

…my first attempt was to use the CGL pbuffer APIs (because CGL
underlies all the other OpenGL APIs on OSX), but these seem to be
broken (several bugs filed with apple)…now I’m looking at the quartz
video driver files in SDL, and realized that the underlying OpenGL
context is an NSOpenGLContext* (in SDL_PrivateVideoData)…

…so, does anyone have any ideas on how to access this context, and if
possible, use it with the other apple pbuffer APIs? For instance, it
seems that apple wants people to “share” contexts, so would I have to
use the NSOpenGL API for pbuffers? If so, can I do this thru plain old
c/c++, without using objc? Also, is it possible to use NSOpenGLContext
with CGL or AGL pbuffer calls?

thanx,
jamie
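
For reference, here is roughly how the CGL route could look from plain
C once SDL has made its context current (CGL is a plain C API, so no
objc is needed). This is a minimal, untested sketch that assumes the
CGL pbuffer calls behave as documented, which, per the above, they
currently may not; the sizes and the function name are made up for
illustration:

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

void try_cgl_pbuffer(void)   /* hypothetical helper name */
{
    /* SDL's NSOpenGLContext is current after SDL_SetVideoMode(...,
       SDL_OPENGL), so the underlying CGL context is reachable like so: */
    CGLContextObj sdl_ctx = CGLGetCurrentContext();
    CGLPixelFormatObj pix = NULL;
    CGLContextObj pbuf_ctx = NULL;
    CGLPBufferObj pbuf = NULL;
    GLint npix = 0, screen = 0;

    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated, kCGLPFAPBuffer,
        kCGLPFAColorSize, (CGLPixelFormatAttribute)32,
        (CGLPixelFormatAttribute)0
    };
    CGLChoosePixelFormat(attribs, &pix, &npix);

    /* a second context that shares textures with SDL's (the "sharing"
       apple wants), plus a pbuffer for it to draw into */
    CGLCreateContext(pix, sdl_ctx, &pbuf_ctx);
    CGLCreatePBuffer(256, 256, GL_TEXTURE_2D, GL_RGBA, 0, &pbuf);
    CGLGetVirtualScreen(sdl_ctx, &screen);
    CGLSetPBuffer(pbuf_ctx, pbuf, 0, 0, screen);

    CGLSetCurrentContext(pbuf_ctx);
    /* ...render the offscreen content here... */

    /* back on SDL's context, with a texture bound, source the pbuffer */
    CGLSetCurrentContext(sdl_ctx);
    CGLTexImagePBuffer(sdl_ctx, pbuf, GL_FRONT);

    /* cleanup; all error checking omitted for brevity */
    CGLDestroyPBuffer(pbuf);
    CGLDestroyContext(pbuf_ctx);
    CGLDestroyPixelFormat(pix);
}

Whether this actually works is exactly the question raised above: the
calls exist and compile from plain C, but the reported CGL pbuffer bugs
may still bite.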

Why not just rewrite the demos to do the offscreen rendering without
pbuffers? IMHO it wouldn’t require much work.


  • Donny Viszneki

hey donny,

On Aug 3, 2004, at 2:15 PM, Donny Viszneki wrote:

Why not just rewrite the demos to do the offscreen rendering without
pbuffers? IMHO it wouldn’t require much work

…yeh, that was actually the last thing I tried, just render to
texture…but I ran into problems with the view being wrong; it didn’t
align to the picture it was supposed to fit…so I thought that maybe
using pbuffers would help reduce the number of context setup switches:
maybe that’s not the case…

…in general, what are the advantages of pbuffers if you can just
accomplish the same thing with a render to texture?

thanx,
jamie

James Tittle II wrote:


…in general, what are the advantages of pbuffers if you can just
accomplish the same thing with a render to texture?

  • pbuffers can use a high precision format (like float)
  • you don’t need to have an OpenGL window to render to a pbuffer

Also, the best way to support render to texture in a portable way might
be to add support for it to SDL. The 1.3 branch already has support for
render-to-texture-like functionality.

Stephane

Stephane Marchesin wrote:


Also, the best way to support render to texture in a portable way
might be to add support for it to SDL. The 1.3 branch already has
support for render-to-texture-like functionality.

… under windows and linux only (or else there wouldn’t be any coding
to do :)

Stephane


Currently, to render to a texture using SDL and OpenGL, I double buffer
and use glTexImage2D to copy what I’ve rendered into a texture. It
seems pretty fast, even though I originally had my doubts.

  • Donny Viszneki

Donny Viszneki wrote:

Currently, to render to a texture using SDL and OpenGL, I double buffer
and use glTexImage2D to copy what I’ve rendered into a texture. It
seems pretty fast, even though I originally had my doubts.

That’s about what SDL does under x11. Under x11, there are pbuffers, but
the current glx interface makes it impossible to do render to texture.
So SDL emulates render-to-texture functionality using a pbuffer plus a
glCopyTexSubImage2D from the pbuffer to a texture. And if the driver is
smart, that copy is accelerated using a hardware blit within the card’s
memory, so it’s almost as fast as render to texture.

Stephane
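
To make the copy-to-texture approach concrete, here is a minimal
sketch of the pattern both of the above describe (names and sizes are
made up; draw_effect() stands in for whatever produces the offscreen
image): render into the back buffer, copy the pixels into a
preallocated texture, then draw the visible frame.

#include <SDL/SDL_opengl.h>

enum { FX_W = 256, FX_H = 256 };        /* size of the offscreen effect */
static GLuint fx_tex;

extern void draw_effect(void);          /* hypothetical effect renderer */

/* once, after SDL_SetVideoMode(..., SDL_OPENGL): make an empty texture */
void fx_init(void)
{
    glGenTextures(1, &fx_tex);
    glBindTexture(GL_TEXTURE_2D, fx_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, FX_W, FX_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);  /* no data yet */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

/* each frame, before drawing the real scene: draw the effect into a
   corner of the (not yet flipped) back buffer and copy it out */
void fx_update(void)
{
    glViewport(0, 0, FX_W, FX_H);
    draw_effect();
    glBindTexture(GL_TEXTURE_2D, fx_tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, FX_W, FX_H);
    /* now clear, restore the viewport, and draw the scene using fx_tex */
}

Copying into a preallocated texture with glCopyTexSubImage2D can stay
entirely on the card; re-specifying the texture with glTexImage2D every
frame may force the driver down a slower path.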


Will my way work on X? I won’t be able to try for myself until I get my
linux box back from its “vacation.” I’m investing quite a bit of time
into methods that use render to texture for fully accelerated
procedural special effects for things like flames, water ripples, and
motion blurs; I’d hate to find out it doesn’t work under my favorite
operating system!

On Aug 3, 2004, at 3:39 PM, Stephane Marchesin wrote:

James Tittle II wrote:

…in general, what are the advantages of pbuffers if you can just
accomplish the same thing with a render to texture?

  • pbuffers can use a high precision format (like float)
  • you don’t need to have an OpenGL window to render to a pbuffer

Also, the best way to support render to texture in a portable way
might be to add support for it to SDL. The 1.3 branch already has
support for render-to-texture-like functionality.

…well, I definitely need floating point buffers ;) But it seems
that apple has an extension that allows this for things beyond
pbuffers, too! (GL_APPLE_float_pixels)

http://developer.apple.com/opengl/extensions/apple_float_pixels.html
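
(For what it’s worth, the extension can be probed from plain C once a
context is current. A hypothetical snippet, not from the demos:)

#include <string.h>
#include <OpenGL/gl.h>

/* returns non-zero if GL_APPLE_float_pixels is advertised; note that
   strstr is a simplification and could also match a longer extension
   name that merely contains this one */
int have_float_pixels(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_APPLE_float_pixels") != NULL;
}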

…time to give this render to texture stuff another whirl…

l8r,
jamie

On Aug 3, 2004, at 2:39 PM, Stephane Marchesin wrote:

http://developer.apple.com/opengl/extensions/apple_float_pixels.html

I’m just a little confused here… what are floating point pixel values
supposed to be good for?

  • Donny Viszneki


It’s the way OpenGL addresses points on a screen. Rather than using pixel
values (which change with screen res), you use a percentage value.
OpenGL then automatically translates into whatever resolution it’s
currently running in.
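
For instance, a sketch of the setup being described (values made up):
an orthographic projection can map the window to a 0..1 coordinate
system, so drawing code never sees pixel counts.

#include <OpenGL/gl.h>

/* map the whole window to 0..1 on both axes, whatever its pixel size */
void use_normalized_coords(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0); /* l, r, b, t, near, far */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* now glVertex2f(0.5f, 0.5f) is the center of the window at
       640x480 or 1600x1200 alike */
}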

It is a good idea, as it means you can use fonts that stay the same
size as the res increases but get sharper, unlike raster fonts (such as
in windows), which get smaller as the screen res increases.

It’s also more intuitive to think of a percentage of the screen for
points rather than a certain number of pixels out of a changing total.
The only downside I can think of is that if the screen is not a 4:3
resolution, graphics will be distorted, whereas with pixel addressing
they won’t be.

On Aug 4, 2004, at 10:40 AM, James Tittle II wrote:

http://developer.apple.com/opengl/extensions/apple_float_pixels.html

I’m just a little confused here… what are floating point pixel
values supposed to be good for?

…floating point pixels are good for high-precision color (color
values aren’t naturally integers, they’re more continuous), and allow
for better color manipulation calculations because ya don’t have to
worry about saturation/overflow so much…they make really nice “high
dynamic range” calculations possible…
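
A made-up illustration of the saturation point:

void saturation_demo(void)   /* hypothetical, for illustration only */
{
    /* 8-bit channels clamp: overshoot from a bright add is lost for good */
    unsigned int  s    = 200 + 150;                          /* 350 */
    unsigned char sum8 = (s > 255) ? 255 : (unsigned char)s; /* 255 */

    /* float channels keep the real value for later passes to use */
    float sumf = 200 / 255.0f + 150 / 255.0f; /* ~1.37, nothing thrown away */

    (void)sum8; (void)sumf;   /* silence unused-variable warnings */
}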

jamie

On Aug 5, 2004, at 1:25 AM, Donny Viszneki wrote:

On Aug 4, 2004, at 10:40 AM, James Tittle II wrote:

I’m just a little confused here… what are floating point pixel
values supposed to be good for?

  • Donny Viszneki

When I asked this question, I had two interpretations in mind, and I
got both of them as answers. In retrospect, they both seem kind of
silly…

It’s the way OpenGL addresses points on a screen. Rather than using
pixel values (which change with screen res), you use a percentage
value. OpenGL then automatically translates into whatever resolution
it’s currently running in.

It is a good idea, as it means you can use fonts that stay the same
size as the res increases but get sharper, unlike raster fonts (such
as in windows), which get smaller as the screen res increases.

It’s also more intuitive to think of a percentage of the screen for
points rather than a certain number of pixels out of a changing total.
The only downside I can think of is that if the screen is not a 4:3
resolution, graphics will be distorted, whereas with pixel addressing
they won’t be.

All your statements are correct, but they apply to 2D graphics. OpenGL
isn’t really geared toward 2D graphics (though, read on, 2D graphics
are included in my thoughts). When the 2D graphics in question are a
user interface (think modern window composition like Quartz Extreme),
if you wanted fonts to scale correctly, I can see no reason you
couldn’t actually render them using the 3D acceleration provided by
OpenGL.

I’ve heard things about future video cards from NVIDIA supporting font
acceleration, which would probably do just that, and I hope (but it is
probably just some wishful thinking) some spacing and wrapping as well.

On Aug 5, 2004, at 4:49 AM, Aaron Deadman wrote:

On Aug 5, 2004, at 9:12 AM, James Tittle II wrote:

…floating point pixels are good for high-precision color (color
values aren’t naturally integers, they’re more continuous), and allow
for better color manipulation calculations because ya don’t have to
worry about saturation/overflow so much…

I don’t see why a gamer would want more than 16,777,216 colors.

The only practical improvement to make to color would be to leave
behind the archaic RGB system that we’ve been bound to thanks to the
colors we can produce using light combinations. As a result, we have a
system that provides us millions of colors, many of them indistinct to
the human eye. As I understand it, there are other color formats that
distribute their values more evenly across what the human eye can
actually see, but I don’t really understand much about the actual
distribution.

Anyhow, I’m sure one of you is right, no matter what flaws I can see /
construe in them. If someone would like to clear this up, please do!

  • Donny Viszneki

…floating point pixels are good for high-precision color (color
values aren’t naturally integers, they’re more continuous), and allow
for better color manipulation calculations because ya don’t have to
worry about saturation/overflow so much…

I don’t see why a gamer would want more than 16,777,216 colors.

…wow, that’s kinda like the bill gates “640KB should be enough for
everybody” attribution ;)

The only practical improvement to make to color would be to leave
behind the archaic RGB system that we’ve been bound to thanks to the
colors we can produce using light combinations. As a result, we have a
system that provides us millions of colors, many of them indistinct to
the human eye. As I understand it, there are other color formats that
distribute their values more evenly across what the human eye can
actually see, but I don’t really understand much about the actual
distribution.

…um, you’re right that RGB is archaic and not very similar to our
actual vision, but it’s still used because that’s how hardware is
built…YUV is a better match to our visual sensitivities because we
are more sensitive to changes in luma/brightness than to color changes
(plus it takes up half the space of an RGBA pixel!)…HSV is also
nicer…

…if ya really wanna be convinced about the benefits, look up High
Dynamic Range (HDR) lighting techniques…the short of it is that by
storing pixels as floating point numbers you avoid the saturation that
comes from being bound to RGB’s 0-1 range, and so you can make more
realistic lighting scenes (think glows and diffusion)…check out the
following link for more info:

http://gamedev.net/reference/articles/article2108.asp
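
(A minimal sketch of the last step such techniques need, squeezing an
HDR value back into the displayable range; this is a Reinhard-style
curve, and the exposure knob is made up for illustration:)

/* 0 stays 0; very bright values approach, but never exceed, 1.0 */
float tone_map(float hdr, float exposure)
{
    float v = hdr * exposure;
    return v / (1.0f + v);
}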

l8r,
jamie

On Aug 6, 2004, at 12:59 AM, Donny Viszneki wrote:


Will my way work on X? I won’t be able to try for myself until I get
my linux box back from its “vacation.” I’m investing quite a bit of
time into methods that use render to texture for fully accelerated
procedural special effects for things like flames, water ripples, and
motion blurs; I’d hate to find out it doesn’t work under my favorite
operating system!

That’ll work, but it might be a bit less efficient than SDL’s way,
since you are reading back from a buffer you’ll write to shortly after.
OTOH, it doesn’t require pbuffers…

Stephane

On Aug 3, 2004, at 3:39 PM, Stephane Marchesin wrote:


That’ll work, but it might be a bit less efficient than SDL’s way,
since you are reading back from a buffer you’ll write to shortly
after. OTOH, it doesn’t require pbuffers…

Stephane

Well, actually, only the video card accesses the surface(s) in question.
That was the whole purpose of my “experimenting” with feedback-based
effects: to see how I could create dynamic special effects produced
entirely by the graphics card, with hardly any work being done on the
CPU at all.



  • Donny Viszneki

On Aug 6, 2004, at 2:23 PM, Stephane Marchesin wrote:


…if ya really wanna be convinced about the benefits, look up High
Dynamic Range (HDR) lighting techniques…the short of it is that by
storing pixels as floating point numbers you avoid the saturation that
comes from being bound to RGB’s 0-1 range, and so you can make more
realistic lighting scenes (think glows and diffusion)…check out the
following link for more info:

http://gamedev.net/reference/articles/article2108.asp

That’s an interesting article. However, you could have really summed
it up by saying “not actually more than 16.7M colors on the display,”
because that’s really what it sounded like you were saying to me. It
sounds like a neat hack for representing the ambience cast by extremely
bright light sources in real life. From a practical perspective, HDR may
be the only solution for real-time video hardware to produce those
sorts of effects. But I have a feeling that if video cards start being
a little more innovative, that sort of functionality may exist without
having to be hacked in. (Personally, I’m a little sick of how
non-innovative graphics hardware companies have become over the past 6
years or so. It seems like they aren’t doing anything really new
anymore.)

  • Donny Viszneki
