Depth buffer

Hi,

Are there any tutorials around for using SDL with the OpenGL Z (depth) buffer? I need to do a lot of work with overlaid images and know nothing about this!!

Many Thanks
Ed

Are there any tutorials around for using SDL with the OpenGL Z (depth)
buffer? I need to do a lot of work with overlaid images and know nothing
about this!!

Depth-buffering is very simple in OpenGL. Just make sure to call

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 32);

before calling SDL_SetVideoMode(). Note that a 32-bit depth-buffer is
requested - it’s most likely not possible to get, but it assures that
you’ll be supplied with the best possible bit-resolution (24 bits on
most recent video hardware).

From this point it’s just plain OpenGL, no more SDL specifics. I’d
suggest reading NeHe’s OpenGL tutorials; they’re very good:
http://nehe.gamedev.net
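
Putting that together, here is a minimal sketch of a depth-buffered
SDL/OpenGL setup (the window size, GL calls, and error handling are just
illustrative, not from the original post):

#include <stdio.h>
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    SDL_Init(SDL_INIT_VIDEO);

    /* Ask for a depth buffer before the GL context is created. */
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 32);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    screen = SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);
    if (screen == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        return 1;
    }

    /* From here on it's plain OpenGL: enable depth testing and clear the
       depth buffer along with the color buffer every frame. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* ... draw your overlaid geometry here, then ... */
    SDL_GL_SwapBuffers();

    SDL_Quit();
    return 0;
}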

Also check out: http://www.libsdl.org/opengl/index.php

--
Regards,
Rasmus Neckelmann

before calling SDL_SetVideoMode(). Note that a 32-bit depth-buffer is
requested - it’s most likely not possible to get, but it assures that
you’ll be supplied with the best possible bit-resolution (24 bits on
most recent video hardware).

Not true: on X11, SDL_SetVideoMode() will fail if it can’t get the
requested depth buffer (or maybe just “at least the requested depth
buffer”). SDL_GetError() will report “Couldn’t find matching GLX visual”.

Other platforms will likely do the same.

SDL doesn’t protect you here like it does with shadow surface fallbacks
on 2D visuals, or automatic audio conversion.
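
So the failure check is on you; a small sketch of what that looks like
(the message text is just an example):

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 32);
if (SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL) {
    /* On X11 this is where SDL_GetError() reports
       "Couldn't find matching GLX visual". */
    fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
    /* Retry with less demanding attributes, or give up. */
}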

A better strategy is to ask for what you want through
SDL_GL_SetAttribute(), and if SDL_SetVideoMode() fails, start lowering
your requirements and try again until you either get a GL context or you
can’t work with the limitations.
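
A much-simplified sketch of that strategy, varying only the depth-buffer
size (the list of sizes and the retry order are just an assumption for
illustration):

static const int depth_sizes[] = { 32, 24, 16 };
SDL_Surface *screen = NULL;
int i;

/* Try progressively smaller depth buffers until one works. */
for (i = 0; i < 3 && screen == NULL; i++) {
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, depth_sizes[i]);
    screen = SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);
}

if (screen == NULL) {
    fprintf(stderr, "Couldn't get a GL visual at all: %s\n", SDL_GetError());
    exit(1);
}

A real implementation would back off several attributes (color depth,
stencil size, and so on) in order of importance, which is what the Quake 3
code below does.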

Quake 3 actually does this pretty well:

http://svn.icculus.org/checkout/quake3/trunk/code/unix/sdl_glimp.c

GLW_SetMode() is what I’m talking about. Ignore the quake-specific bits
and look at the for-loop. It tries everything at high quality, and
failing that, it starts reducing one attribute at a time, in the order
of importance, until SDL_SetVideoMode() succeeds or the visual wouldn’t
work out for the game.

The code itself could be cleaner with some enums and better structuring,
but the concept is basically perfect for this.

–ryan.

Sorry for the misinformation… :slight_smile:

I’m pretty sure you can do it on Win32 though, so that’s probably what
fooled me.

On 11/22/06, Ryan C. Gordon wrote:

before calling SDL_SetVideoMode(). Note that a 32-bit depth-buffer is
requested - it’s most likely not possible to get, but it assures that
you’ll be supplied with the best possible bit-resolution (24 bits on
most recent video hardware).

Not true: on X11, SDL_SetVideoMode() will fail if it can’t get the
requested depth buffer (or maybe just “at least the requested depth
buffer”). SDL_GetError() will report “Couldn’t find matching GLX visual”.


Regards,
Rasmus Neckelmann