Difference between Mac OS X and Windows render results

Greetings.
Please help…

I'm trying to port some games from the HGE engine to my own engine, based on SDL/OpenGL.
Almost everything works fine :slight_smile:

But I discovered a small difference in the render results between rendering
through SDL/OpenGL on Windows and on Mac OS X.

There are two screenshots:
Windows
http://motosvit.com/Anand/mac/Windows_screenshot.png

Mac OS X
http://motosvit.com/Anand/mac/MacOSX_screenshot.png

Corresponding .png files are here:
http://motosvit.com/Anand/mac/bg2.png
http://motosvit.com/Anand/mac/logo.png

The difference is around the logo - Mac OS X renders it with a dark border.
I tested on the same computer, where both Mac OS X 10.5 and Vista are
installed, so the video card was the same.

I got the same result on 10.4 (an old PowerBook with a PowerPC CPU).

The question: what is the cause of this difference?

The test application is very simple. I pasted the OS X version here; the
Windows version differs only in the paths to the image files. The renderer is
OpenGL in both cases.

I used the latest SDL 1.3 and SDL_image from SVN to reproduce this behavior.

Thank you :slight_smile:

//
// main.m
// SDL_test
//
// Created by user on 3/18/09.
// Copyright MyCompanyName 2009. All rights reserved.
//
#include "SDL.h"
#include "SDL_image.h"
#include "BfxWorkingDirectory.h"

#include <cstdio>     // printf
#include <cstring>    // strcmp
#include <exception>  // std::exception
#include <OpenGL/OpenGL.h>

int main(int argc, char** argv)
{
try
{
// Init SDL here - it will initialize the SDL timer at the right time
SDL_Init(SDL_INIT_EVERYTHING);

 if (SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1))
   return 1;

 if (SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 0))
   return 2;

 if (SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8))
   return 3;

 if (SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8))
   return 4;

 if (SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8))
   return 5;

 if (SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8))
   return 6;

 SDL_EnableUNICODE(1);


 bool fullscreen = false;//true;
 SDL_WindowID windowid = 0;

 unsigned int flags = (fullscreen ? SDL_WINDOW_FULLSCREEN : 0) |
                      SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN;

 int windowwidth = 800;
 int windowheight = 600;

 if (!fullscreen)
   windowid = SDL_CreateWindow("Test application",
                               SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                               windowwidth, windowheight, flags);
 else
 {
   // fullscreen path not used in this test
 }

 if (!windowid)
   return 3;

 //select renderer
 int driver_count = SDL_GetNumRenderDrivers();
 int opengl_index = -1;
 for (int i=0; i<driver_count && opengl_index == -1; ++i)
 {
   SDL_RendererInfo info;
   SDL_GetRenderDriverInfo(i, &info);
   if (strcmp(info.name, "opengl") == 0)
     opengl_index = i;
 }

 if (opengl_index == -1)
   return 4;

 if (SDL_CreateRenderer(windowid, opengl_index, 0))
   return 5;

 //load the test image
 SetWorkingDirectoryToResources();

 SDL_Surface* background = IMG_Load("bg2.png");
 if (!background)
 {
   printf("Cant load image.\n");
   SDL_Quit();
   return 1;
 }

 SDL_TextureID backgroundTexture = SDL_CreateTextureFromSurface(0, background);
 if (!backgroundTexture)
 {
   printf("Can't create OpenGL texture from surface.\n");
   SDL_FreeSurface(background);
   SDL_Quit();
   return 1;
 }

 SDL_FreeSurface(background);

 SDL_Surface* logo = IMG_Load("logo.png");
 if (!logo)
 {
   printf("Cant load image.\n");
   SDL_Quit();
   return 1;
 }

 SDL_TextureID logoTexture = SDL_CreateTextureFromSurface(0, logo);
 if (!logoTexture)
 {
   printf("Cant create OpenGL texture from surface. \n");
   SDL_FreeSurface(logo);
   SDL_Quit();
   return 1;
 }

 bool mousepressed = false;
 //run the event loop
 while (true)
 {
   SDL_Event _event;

   while(SDL_PollEvent(&_event)) {

     switch (_event.type)
     {
       case SDL_QUIT:
         return 0;

       case SDL_WINDOWEVENT:
       case SDL_SYSWMEVENT:
       case SDL_WINDOWEVENT_CLOSE:
         break;

       case SDL_MOUSEMOTION:

         break;

       case SDL_MOUSEBUTTONDOWN:
         mousepressed = true;
         break;

       case SDL_MOUSEBUTTONUP:
         mousepressed = false;
         break;


       case SDL_MOUSEWHEEL:
         break;

       case SDL_KEYDOWN:
         if (_event.key.keysym.scancode == SDL_SCANCODE_ESCAPE)
           return 0;
         break;

     }

   }

   // Process keys

   SDL_Rect dest;
   dest.x = 0; dest.y = 0; dest.w = 800; dest.h = 600;

   SDL_RenderCopy(backgroundTexture, NULL, &dest);

   // SDL_BLENDMODE_NONE  = 0x00000000,   /**< No blending */
   // SDL_BLENDMODE_MASK  = 0x00000001,   /**< dst = A ? src : dst (alpha is mask) */
   // SDL_BLENDMODE_BLEND = 0x00000002,   /**< dst = (src * A) + (dst * (1-A)) */
   // SDL_BLENDMODE_ADD   = 0x00000004,   /**< dst = (src * A) + dst */
   // SDL_BLENDMODE_MOD   = 0x00000008    /**< dst = src * dst */

   dest.x = 400 - 128;
   dest.y = 0;
   dest.w = 256;
   dest.h = 256;

   SDL_SetTextureBlendMode(logoTexture, 2);  // 2 == SDL_BLENDMODE_BLEND (see above)
   SDL_RenderCopy(logoTexture, NULL, &dest);

   SDL_RenderPresent();

   //do not waste 100% CPU
   SDL_Delay(1);

 }

 SDL_DestroyTexture(backgroundTexture);
 SDL_DestroyTexture(logoTexture);

 //quit
 SDL_DestroyRenderer(windowid);
 SDL_DestroyWindow(windowid);

 SDL_Quit();

}
catch(std::exception& e)
{
  //std::cerr << "Exception occurred: " << e.what() << std::endl;
}

return 0;
}

Hey. Here’s our theory:
http://bugzilla.libsdl.org/show_bug.cgi?id=868

Here’s a test gradient to try:
http://m8y.org/tmp/Sky.png

This one shades from FFFFFFFF to 00FFFF00.

Ok. We seem to have two issues.
One is an issue with PNG colour-correction data that SDL is apparently loading and that doesn’t play well with our hardcoded colours.
pngcrush -rem gAMA -rem cHRM -rem iCCP -rem sRGB seems to solve this.
Unfortunately it is completely separate from the gradient problem.

Still trying to figure out what is up with that one, but we’re about to verify that the SDL_Surface values match what we think they should be before passing 'em to OpenGL.
Wondering if it could be some glBlendFunc thing.
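For that check, dumping the first column of the loaded surface is enough to compare against what GIMP reports - something along these lines (a rough sketch only, assuming 32-bit pixels laid out as 0xAARRGGBB; dump_first_column is just a placeholder name):

#include <stdio.h>
#include "SDL.h"

/* Print A | B | G | R for the first column of a 32-bit surface,
   in the same column order as the dumps further down the thread. */
static void dump_first_column(SDL_Surface *surf)
{
    for (int y = 0; y < surf->h; ++y) {
        Uint32 px = *(Uint32 *)((Uint8 *)surf->pixels + y * surf->pitch);
        printf("%3u | %3u | %3u | %3u\n",
               (unsigned)((px >> 24) & 0xFF),   /* A */
               (unsigned)( px        & 0xFF),   /* B */
               (unsigned)((px >> 8)  & 0xFF),   /* G */
               (unsigned)((px >> 16) & 0xFF));  /* R */
    }
}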

nemo wrote:

Hey. Here’s our theory:
http://bugzilla.libsdl.org/show_bug.cgi?id=868

Here’s a test gradient to try:
http://m8y.org/tmp/Sky.png

This one shades from FFFFFFFF to 00FFFF00.

Unfortunately, while the effects of screwing up a channel would look like what we’re seeing, and while our test gradient seemed to work, further testing with red held at 100% didn’t seem to support the hypothesis.

Testing removal of iCCP/cHRM/gAMA next.

Now, if I take the image in GIMP and merge it against an all-black layer, I get the same values as OS X (not including the alpha, of course):
B | G | R
8 | 5 | 4

instead of
157 | 92 | 84

So the bug appears to be that whatever is loading the surface is acting as if it is loading the image blitted against all black, then reapplying the alpha.

Just a WAG - anyway, the symptoms are the same.

Confirmed.
When we dumped under Linux, the values were correct.
Furthermore, some fully transparent areas were:
A | B | G | R
0 | 255 | 255 | 255

and some were
0 | 0 | 0 | 0

on different parts of the file, presumably because different editors had touched the png at different times.

This is fine; in the past, wherever we needed to check for full transparency, we checked the high (alpha) byte rather than the entire word.

Perhaps someone got the idea of normalising the two to 00000000 but did it in an odd fashion. :slight_smile:

So the workaround for now is indeed to take the OSX RGB values and brighten them in proportion to the alpha in the surface.
This has an unfortunate performance cost, but is luckily trivial to implement. Hopefully this bug will be fixed quickly.

Final workaround:
{$IFDEF DARWIN}
tmpP := tmpsurf^.pixels;
for i := 0 to (tmpsurf^.pitch shr 2) * tmpsurf^.h - 1 do
begin
    tmpA := tmpP^[i] shr 24 and $FF;
    tmpR := tmpP^[i] shr 16 and $FF;
    tmpG := tmpP^[i] shr 8 and $FF;
    tmpB := tmpP^[i] and $FF;
    if tmpA <> 0 then
    begin
        tmpR := round(tmpR * 255 / tmpA);
        tmpG := round(tmpG * 255 / tmpA);
        tmpB := round(tmpB * 255 / tmpA);
    end;
    if tmpR > 255 then tmpR := 255;
    if tmpG > 255 then tmpG := 255;
    if tmpB > 255 then tmpB := 255;
    tmpP^[i] := (tmpA shl 24) or (tmpR shl 16) or (tmpG shl 8) or tmpB;
end;
{$ENDIF}

This is not as accurate at the low end due to the initial truncation; a slight bias could be introduced to help a bit.
Anyway, it works “good enough”.

Hopefully this will be fixed soon and we can remove it.
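For anyone doing the same fix from C, a rough equivalent of that pass could look like the following (only a sketch, assuming a 32-bit surface in the same 0xAARRGGBB layout as the Pascal above; the "+ a / 2" adds the small rounding bias mentioned earlier):

#include "SDL.h"

/* Reverse the premultiplication: brighten R/G/B in proportion to A.
   Assumes a 32-bit surface laid out as 0xAARRGGBB, as in the Pascal above.
   Lock the surface around this if SDL_MUSTLOCK(surf) says so. */
static void unpremultiply(SDL_Surface *surf)
{
    Uint32 *p = (Uint32 *)surf->pixels;
    int count = (surf->pitch / 4) * surf->h;

    for (int i = 0; i < count; ++i) {
        Uint32 a = (p[i] >> 24) & 0xFF;
        Uint32 r = (p[i] >> 16) & 0xFF;
        Uint32 g = (p[i] >> 8)  & 0xFF;
        Uint32 b =  p[i]        & 0xFF;

        if (a != 0) {
            r = (r * 255 + a / 2) / a;
            g = (g * 255 + a / 2) / a;
            b = (b * 255 + a / 2) / a;
            if (r > 255) r = 255;
            if (g > 255) g = 255;
            if (b > 255) b = 255;
        }
        p[i] = (a << 24) | (r << 16) | (g << 8) | b;
    }
}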

Ok, dumping the first column of a buggy SDL surface, we noticed an interesting pattern:

             A   | B   | G   | R
SDL on OSX:  206 | 177 | 131 | 125
GIMP:        206 | 220 | 163 | 155

SDL on OSX:  14  | 8   | 5   | 4
GIMP:        14  | 157 | 92  | 84

The pattern seems to be that if you take:
177 * (255/206) ~ 219

and
8 * (255/14) ~ 146

Now, that 2nd one is not so accurate. But:
157 / (255/14) = 8.62, which truncated is 8.

220 / (255/206) = 177.72, which truncated is 177.

So we could be dealing with some curious modification proportional to alpha, with a floor operation to boot.

Fortunately the fix for that is fairly trivial: until SDL is fixed, we can ifdef DARWIN to reverse the damage.
The truncation means some accuracy is lost, but hopefully not too much.

Hmm.
I should process the loaded SDL_Surface in this way, right?

Yes, and if you get swapped colors you just have to convert the surface like this:
http://www.idevgames.com/forum/showpost.php?p=85954&postcount=9
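
The linked post boils down to forcing the loaded surface into one known 32-bit layout before uploading it - roughly like this (a sketch, not the exact code from the link; the masks match the 0xAARRGGBB layout used elsewhere in this thread):

#include "SDL.h"

/* Convert whatever SDL_image returns into a known 32-bit ARGB layout so the
   channel order is the same on every platform.  The caller frees the result. */
static SDL_Surface *convert_to_argb8888(SDL_Surface *src)
{
    SDL_Surface *tmp = SDL_CreateRGBSurface(SDL_SWSURFACE, src->w, src->h, 32,
                                            0x00FF0000,   /* R */
                                            0x0000FF00,   /* G */
                                            0x000000FF,   /* B */
                                            0xFF000000);  /* A */
    if (!tmp)
        return NULL;

    SDL_Surface *converted = SDL_ConvertSurface(src, tmp->format, SDL_SWSURFACE);
    SDL_FreeSurface(tmp);
    return converted;
}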

bye
Vittorio

On Fri, Oct 30, 2009 at 12:19 PM, Dmytro Bogovych <dmytro.bogovych at gmail.com> wrote:

Hmm.
I should process the loaded SDL_Surface in this way, right?



Dmytro Bogovych wrote:

It did not help.
I use Xcode 3 on Intel/10.5 to build the application and test on a PowerBook G4 / 10.4.

Well, if you are using PPC you could have a different byte order from us.
That little hack above does not handle little-endian/big-endian differences.

Now, I thought the byte order was consistent across OS X, but maybe it isn’t.
Asking our OSX guy to dump out the pixel array so I can see.
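
If the byte order does turn out to differ, one way to make the workaround independent of it is to take the channel masks and shifts from the surface's own pixel format instead of hardcoding 0xAARRGGBB - a rough sketch, assuming a 32-bit surface with an alpha channel:

#include "SDL.h"

/* Same un-premultiply pass, but byte-order independent: the channel positions
   come from the surface's pixel format rather than being hardcoded.
   Assumes a 32-bit surface with an alpha channel. */
static void unpremultiply_portable(SDL_Surface *surf)
{
    SDL_PixelFormat *fmt = surf->format;
    Uint32 *p = (Uint32 *)surf->pixels;
    int count = (surf->pitch / 4) * surf->h;

    for (int i = 0; i < count; ++i) {
        Uint32 a = (p[i] & fmt->Amask) >> fmt->Ashift;
        Uint32 r = (p[i] & fmt->Rmask) >> fmt->Rshift;
        Uint32 g = (p[i] & fmt->Gmask) >> fmt->Gshift;
        Uint32 b = (p[i] & fmt->Bmask) >> fmt->Bshift;

        if (a != 0) {
            r = (r * 255 + a / 2) / a;
            g = (g * 255 + a / 2) / a;
            b = (b * 255 + a / 2) / a;
            if (r > 255) r = 255;
            if (g > 255) g = 255;
            if (b > 255) b = 255;
        }
        p[i] = (a << fmt->Ashift) | (r << fmt->Rshift) |
               (g << fmt->Gshift) | (b << fmt->Bshift);
    }
}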