SDL 1.3 on Mac OS X speed hit

Hi, I thought I would share a few tidbits of what I know.

glBegin()/glEnd():
Interestingly, OpenGL 3 will actually eliminate glBegin()/glEnd(),
a.k.a. immediate mode. So avoiding those APIs is not just about
performance anymore :stuck_out_tongue:
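
For reference, here is a minimal sketch of the vertex-array path that
replaces a glBegin()/glEnd() block in the fixed-function pipeline
(nothing SDL-specific, just plain GL):

```c
#include <OpenGL/gl.h>

/* One triangle's worth of 2D vertex data, kept in a client-side array
   instead of being fed vertex-by-vertex through glVertex*(). */
static const GLfloat triangle_verts[] = {
     0.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f, -1.0f,
};

void draw_triangle(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, triangle_verts);
    glDrawArrays(GL_TRIANGLES, 0, 3);   /* one call submits all vertices */
    glDisableClientState(GL_VERTEX_ARRAY);
}
```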

Quick notes about the multithreaded engine:
First, if you have an older version of Xcode or an OS X SDK that
predates 10.4.7 (or was it 10.4.8?), the constant kCGLCEMPEngine is
not defined. I recommend defining it yourself or just using the
explicit number 313. (I got bit by this in some other projects.)
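
Something along these lines is what I mean (just a sketch;
kCGLCEMPEngineValue and enable_mt_gl_engine are names I made up here,
and 313 is the raw value of the missing constant):

```c
#include <OpenGL/OpenGL.h>

/* The kCGLCEMPEngine enum is missing from pre-10.4.7 SDK headers, so
   fall back to the raw value 313 as described above. */
enum { kCGLCEMPEngineValue = 313 };

/* Enable the multithreaded GL engine on the current context.
   Returns 0 on success, or the CGLError code on failure. */
static int enable_mt_gl_engine(void)
{
    CGLContextObj ctx = CGLGetCurrentContext();
    if (ctx == NULL)
        return 1;

    CGLError err = CGLEnable(ctx, (CGLContextEnable)kCGLCEMPEngineValue);
    return (err == kCGLNoError) ? 0 : (int)err;
}
```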

Second, the multithreaded engine's performance will vary with how your
application uses OpenGL. If you don't dispatch enough commands to keep
the OpenGL pipeline busy, you may actually see worse performance due
to the extra overhead incurred by the multithreaded engine. (I saw
this with some trivial OpenGL programs I wrote.) So we will need to
benchmark to figure out whether it is worth activating, or whether it
should be exposed through the end-user API so applications can enable
it based on their own workload.

Pixel Formats:
As for pixel formats, Apple keeps stating you should use a particular
pixel format for optimal performance (GL_BGRA with
GL_UNSIGNED_INT_8_8_8_8_REV, plus some other stuff you have to do in
the image loaders). I'm a little puzzled by it because I think they
play a few games to deal with the endian differences between PowerPC
and Intel. They do have a new developer example called
OpenGLScreenShot online that talks about it in the code.
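
To make that concrete, here is roughly what the recommended upload
path looks like (just a sketch; the extra tricks from the sample, such
as APPLE_client_storage, are left out, and pixels/w/h are assumed to
come from your image loader):

```c
#include <OpenGL/gl.h>

/* Upload BGRA pixel data using the format/type pair Apple recommends;
   the 8_8_8_8_REV type maps to the card's native layout on both
   PowerPC and Intel, so the driver can avoid a swizzle pass. */
GLuint upload_texture(GLsizei w, GLsizei h, const void *pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    return tex;
}
```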

Also, I’ve been working on some ImageIO stuff for another project,
and I mentioned to Sam that I’d be interested in seeing the SDL_image
library for 1.3 use ImageIO as its backend as well. A few interesting
tidbits about ImageIO: it uses opaque data types for the image
references, so to actually get at the pixel data you must “draw” the
image into a graphics context. I noticed that for 24-bit RGB images
you cannot actually create a 24-bit context; it must be a 32-bit
context. Also, the Accelerate framework’s vImage APIs don’t really
like dealing with RGB and want ARGB, though they do have one set of
APIs that will convert between 24-bit and 32-bit. I have not
experimented with 16-bit formats yet, but I’m wondering if it’s just
better to always convert to 32-bit immediately in the image loaders.
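
Here is roughly the dance I am describing, stripped down to the
essentials (a sketch only, with no error checking; load_rgba32 is just
a name I made up):

```c
#include <ApplicationServices/ApplicationServices.h>
#include <stdlib.h>

/* Load an image through ImageIO and "draw" it into a 32-bit bitmap
   context to get at the raw pixels. The caller frees the result. */
void *load_rgba32(CFURLRef url, size_t *out_w, size_t *out_h)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL(url, NULL);
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    size_t w = CGImageGetWidth(image);
    size_t h = CGImageGetHeight(image);

    /* A 24-bit RGB context is not supported, so always ask for 32-bit. */
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    void *pixels = calloc(w * h, 4);
    CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, space,
                                             kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), image);

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    CGImageRelease(image);
    CFRelease(source);

    *out_w = w;
    *out_h = h;
    return pixels;
}
```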

Thanks,
Eric

> saw this with some trivial OpenGL programs I wrote.) So we will need to
> benchmark to figure out whether it is worth activating, or whether it
> should be exposed through the end-user API so applications can enable
> it based on their own workload.

Yeah, maybe this should be something flagged with SDL_GL_SetAttribute(),
which would be a no-op on most platforms.

SDL_GL_MULTITHREADED

As long as there’s a big warning sign next to it that says “THIS DOES
NOT MEAN ‘MAKE THE GL THREAD SAFE’ ON ANY PLATFORM.”
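
Something like this, purely as a usage sketch of the proposed
attribute (SDL_GL_MULTITHREADED is not a real attribute yet):

```c
#include "SDL.h"

/* Hypothetical: request the multithreaded GL engine before creating
   the GL context; platforms without support would treat it as a no-op. */
void request_mt_gl(void)
{
    SDL_GL_SetAttribute(SDL_GL_MULTITHREADED, 1);
    /* ...then set the video mode / create the GL context as usual... */
}
```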

–ryan.