glReadPixels works for color, but returns 0's and 1's for depth in NVIDIA FleX simulator (SDL 2.0.4)

Greetings,

I’m trying to simulate a depth camera in an OpenGL scene. I’m using the NVIDIA FleX simulator with SDL 2.0.4.

I’ve been able to successfully simulate a depth camera in a smaller code sample that uses GLFW:

float pixels_d[ 1 ] = { 0 };
glReadPixels( 0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, pixels_d );
printf("shaders depth %f\n", pixels_d[ 0 ]);


unsigned char pixels[ 1 * 1 * 4 ] = { 0 };
glReadPixels( 0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels );
printf("shaders RGB %i   ", pixels[ 0 ]);

I performed a number of checks and I’m quite sure these return good values in my small code sample. However, when I put these segments into the FleX simulator, only the RGB read works. The depth read always returns a 0 or 1, with no decimal places, printed as if it were an int. The same thing happens when I grab a whole image into a buffer instead of a single pixel. I have tried a variety of calls to initialize depth reading, including glEnable(GL_DEPTH_TEST);, glDepthFunc(GL_LESS);, and glClear(GL_DEPTH_BUFFER_BIT);, to no avail. Here are my SDL attributes:
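In case it helps, here is the kind of diagnostic I can wrap around the read (a sketch; I haven’t confirmed how FleX binds its framebuffers, so the FBO guess is an assumption on my part):

```cpp
// Sketch of a diagnostic around the depth read. If FleX renders into its own
// FBO (e.g. for MSAA or shadow maps), the default framebuffer's depth buffer
// may never be written, and reading GL_DEPTH_COMPONENT from a framebuffer
// without a readable depth attachment raises GL_INVALID_OPERATION and leaves
// the output variable untouched -- which would look exactly like a constant 0.
GLint readFbo = 0;
glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &readFbo);
printf("read framebuffer: %d\n", readFbo);  // 0 means the default framebuffer

float depth = 0.0f;
glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    printf("glReadPixels error: 0x%x\n", err);  // 0x502 = GL_INVALID_OPERATION
```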


SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);

SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_COMPATIBILITY);

// Turn on double buffering with a 24bit Z buffer.
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
SDL_GL_SetAttribute(SDL_GL_BUFFER_SIZE, 32);

SDL_GL_CreateContext(window);
SDL_GL_SetSwapInterval(1);

I’ve tried every combination of bit depths in the SDL depth size attribute (16, 24, 32) and of formats in glReadPixels (GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, etc., with variables of the matching type). Of note: my SDL version cannot accept a depth size of 32 and just segfaults, so I’m stuck with 16 or 24. When I read either of these with GL_DEPTH_COMPONENT into a 32-bit float, it returns 1. Reading with a 16- or 24-bit format outputs 0. Neither prints with decimal places, so if I grab a whole image it just spits out a binary array (with my GLFW sample the same code outputs floating-point values between 0 and 1).

It seems to me that there is some bug in SDL or FleX, or some OpenGL or SDL flag in the FleX rendering code that is blocking depth from being read, but I can’t figure out what it is. Here is a link to most of the rendering code in FleX: https://github.com/henryclever/FleX_PyBind11/blob/master/demo/opengl/shadersGL.cpp. Beware: it’s over 3000 lines.

Any ideas? I’ve been stumped on this problem for too long!!

Thanks,
Henry C.
www.hmclever.com

I tried changing the OpenGL version:

	SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 6);

I can verify that the simulator works with OpenGL 4.6, but it makes no difference to the problem; still stuck.

Henry

Another update: while the RGB pixel read outputs values in the range 0–255, it gives a garbage image that doesn’t resemble the scene. So it appears that neither read is actually working under SDL.
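One more thing I should double-check (an assumption on my part; I haven’t traced where FleX calls its swap): with a double-buffered window, the back buffer contents are undefined after the swap, so both reads have to happen before SDL_GL_SwapWindow. A sketch of the ordering that usually matters:

```cpp
// Sketch of the read/swap ordering with double buffering.
// renderScene() is a placeholder for whatever FleX draws per frame,
// and this assumes the default framebuffer is bound for reading.
renderScene();

unsigned char rgba[4] = { 0 };
float depth = 0.0f;
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

SDL_GL_SwapWindow(window);  // swap only after the reads
```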

-Henry