With the release of the MacBook Pro with Retina display, Apple has separated “points” from “pixels” on the OS X platform, the same way it has on iOS. The mechanism is a little different, but the concept appears to be the same: a window reported as 800x600 points may actually contain 1600x1200 pixels (or more, or fewer, depending on a whole range of factors).
Under OS X, OpenGL works by default in a sort of compatibility mode that gives you one pixel per “point”, which matches the pixel density of normal displays but results in blocky-looking graphics on Retina displays. To use the full resolution of a high-density display, you need to call “[view setWantsBestResolutionOpenGLSurface:YES];” to tell OS X that you want something other than the standard one-pixel-per-point resolution. In my copy of SDL 1.2, I’ve done this in SDL_QuartzVideo.m, around lines 830 and 1080, where the OpenGL view is being created. My added code in both places looks like this:
Code:
/* Opt in to the full Retina backing resolution, but only when the
   selector exists, so this still runs on earlier OS X versions. */
if ( [ window_view respondsToSelector:@selector(setWantsBestResolutionOpenGLSurface:) ] )
{
    [ window_view setWantsBestResolutionOpenGLSurface:YES ];
}
This works for setting up the view to be Retina-compatible (and is backwards-compatible with earlier OS X versions where the option wasn’t available), but it still doesn’t expose the actual pixel dimensions to client code; we’re left using “points” instead of “pixels” for almost everything: SDL_SetVideoMode takes a size in points, SDL_ListModes(NULL, SDL_FULLSCREEN|SDL_HWSURFACE) returns screen sizes in points, SDL_VIDEORESIZE events report new sizes in points, and so on. So when client code passes those dimensions to glViewport(), it ends up drawing to only a small portion of the OpenGL surface, and I haven’t found an SDL interface that seems intended to let client code convert between these “point” values and the actual underlying number of pixels.
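For what it’s worth, the Cocoa-level conversion itself doesn’t look hard; NSView has had convertRectToBacking: since 10.7. This is just a sketch of what I’d expect the glue to look like, assuming you can get at the NSView that SDL created (called window_view here, as in the patch above), not anything SDL actually exposes today:
Code:
NSRect bounds = [ window_view bounds ];    /* view size in points */
NSRect backing = bounds;                   /* pre-10.7 fallback: points == pixels */
if ( [ window_view respondsToSelector:@selector(convertRectToBacking:) ] )
{
    backing = [ window_view convertRectToBacking:bounds ];   /* size in pixels */
}
glViewport( 0, 0, (GLsizei) backing.size.width, (GLsizei) backing.size.height );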
Half the time, I suspect the easiest thing to do would simply be to make all of these functions use pixels rather than points. The only real downside is that asking for an 800x600 video mode would produce a very small window on a Retina display and a much larger one on a non-Retina display, which maybe isn’t such a terrible thing.
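If SDL went that route, the backend would presumably have to do the opposite conversion internally when creating the window. Again, purely a hypothetical sketch (qz_window, requested_w, and requested_h are made-up names, not existing SDL variables), using NSWindow’s backingScaleFactor, which also appeared in 10.7:
Code:
CGFloat scale = 1.0;
if ( [ qz_window respondsToSelector:@selector(backingScaleFactor) ] )
{
    scale = [ qz_window backingScaleFactor ];    /* 2.0 on Retina displays */
}
/* Convert the requested pixel dimensions into the point size Cocoa wants. */
NSSize point_size = NSMakeSize( requested_w / scale, requested_h / scale );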
But I imagine that this “convert from points to pixels” side of things has already been worked out for SDL’s iOS Retina support, so maybe it’s just a matter of porting the general strategy used there over to the OS X side of SDL. Has anybody else already started in this direction? (And should I be doing this experimentation in SDL 2, instead of 1.2? I haven’t really gotten my feet wet with version 2 yet.)