Android Galaxy S3 and Fire TV OpenGL issues

Hello! I am experiencing some trouble, and I’ve researched this a lot without finding anything that solves the problem yet. First, some proof that the app used to work (this was using SDL 2.0.8) on the Galaxy S3 (Firebase screenshot from the pre-launch report follows). I’m creating the window with SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE and the context with gl = SDL_GL_CreateContext(sdlWindow);

The shader isn’t perfect when highp float isn’t available, but it otherwise worked on those devices. I was still wrapping up development when SDL 2.0.9 was released, so I upgraded; I also upgraded many of the Android SDK packages and had to change Gradle versions in the process. Now the app doesn’t work right on the S3 at all. I realize it’s an API level 18 device, but it’s still disheartening, and I’m struggling to make it work ANYWAY… since it sort of works, I feel like there HAS to be a way to make it work.

(cannot post 2nd image yet, but suffice to say the final draw call is apparent and nothing else)

I tried downgrading to the older revision of build.gradle and SDL 2.0.8, but some element of the Android SDK must be too new and/or intentionally broke the older devices. I can’t figure it out, and I don’t have an S3 to play with, but I do have an older Fire TV, and I’ve been debugging what works and what doesn’t on it, since it seems to be the exact same issue (based on screenshots). I’m two days into it now, maybe a little more.

I am using most of the tips found in other threads. I’ve tried every combination of:

SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 0);
//    SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 5);
//    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 6);
//    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 5);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 0);
//    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
//    SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 5);
//    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);

I wrote a nasty macro to see the outputs of any of these (anyone better with these macros? the double parameter annoys me):

int resultInt = 0; /* must be in scope wherever the macro is invoked */
#define logGottenGlAtrib(name, literalAttrib) SDL_GL_GetAttribute(literalAttrib, &resultInt); \
    SDL_Log("contexts %s %i", name, resultInt);

I am getting a result like this with the current settings:

contexts SDL_GL_DEPTH_SIZE 0
contexts SDL_GL_BUFFER_SIZE 32
contexts SDL_GL_RED_SIZE 8
contexts SDL_GL_GREEN_SIZE 8
contexts SDL_GL_BLUE_SIZE 8
contexts SDL_GL_ALPHA_SIZE 8
Open GL says we are OpenGL ES 2.0

I’m using a single call to SDL_GL_SwapWindow(window); when I’m done rendering. Also worth noting: since this seemed like a vsync issue (some sort of over-eager automatic buffer swapping), I have tried both disabling vsync and explicitly enabling it. I have figured out the following:

  1. Only the final glDrawArrays will ever show up. When I comment out the later calls, my earlier draw calls work perfectly, but when I try to render the UI (with or without enabling blending), it seems the frame is either cleared or corrupted somehow.

I tried adding glFlush() and things like that between calls. I will probably experiment with sprinkling glFlush() on every other line of code next… I’m running out of ideas to try.

Also, things like setting uniforms seem to be occurring out of order with the draw call, since I am rendering in a loop. While it’s clear some of these, such as position, work, others like color don’t work at all (if I hardcode a value it goes into the shader nicely, but otherwise I get a seemingly random value from a random object instead of the uniform value I just set). I can experiment with setting the uniforms a different way, but since hardcoded values work I can’t conclude the function is broken, only that it’s not working how I would expect.

I will try to get some more screenshots soon. I have some expected/actual screenshots that are really random and probably not helpful. Whatever I do, only one glDrawArrays seems to show up in the framebuffer, and it’s usually the final one.

Some other odd things: if the rendering is working enough to render one (final UI) object, and I then clear the depth and/or stencil buffer between objects (even though depth test, depth writes, and the stencil test are all disabled), clearing those buffers leads to nothing being observed at all: a blank screen.

I don’t want to point fingers or jump to any grand conclusion, but it seems intentionally broken, so I’m wondering: is there any option to target these devices now? Do I have to use ANGLE? Many thanks in advance for any input on this topic! I’m not trying to help sell newer phones; I’m trying to sell an app that actually works on whatever device the user has, and now that seems impossible and has (in conjunction with publishing an AAB that didn’t work at all) made my app launch pretty unsuccessful on the Android platform.

I have some ideas about it. Since I can get one draw call to work, maybe I can do all my rendering to some other target (a texture or non-default framebuffer) and then render that to the default framebuffer. But it’s more work than I’ve committed to completing just yet, and I’m pretty sure I could run into a very similar issue with that approach. Thanks!

In case anyone is struggling with this: I believe I have made a breakthrough by using an older version of the NDK (I happened to make a backup before each time I upgraded). It still doesn’t recover all devices, but more devices than ever work in the Amazon developer preview and Firebase! I need to run another Firebase test tomorrow to see if the S3 is working now; no luck using a framebuffer workaround on that device.

It is worth pointing out that SDL 2.0.9 works just as well on these devices as 2.0.8 used to (the issue started when I upgraded, but the SDL upgrade is not the direct cause; I ran a test to verify this). The problem was really the NDK and some other element of the Android SDK changing. There is still something slightly different in my setup from before, but I think it’s just some Gradle magic that I’m relatively powerless to comprehend or correct (whatever random binary it decides to download now will not work), or maybe it’s some other Android SDK component I haven’t identified yet. I tried restoring the SDK from a backup (no luck, though maybe my backup is too new).

I have tried the downgrade in conjunction with checking out some of my history from when the app was working. No luck so far, so I don’t think it’s any of the code I’ve checked into my project.

I would really like to use the latest NDK, but it seems there is no way to do so without dropping support for older phones. I would like my app to work on new phones and old ones. I hope I do not need to release multiple versions of the app.
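For anyone in the same spot: newer Android Gradle plugin versions (3.5 and up) let you pin the NDK revision per module via ndkVersion, so the build stops silently substituting whichever side-by-side NDK happens to be installed. A minimal sketch; the revision string below is only an example (it happens to be NDK r18b), so substitute the last revision that worked for you:

```gradle
android {
    // Pin the exact NDK revision the module is built with, instead of
    // letting Gradle pick whatever it finds or downloads.
    ndkVersion "18.1.5063045"
}
```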

Newer-NDK workarounds I know about but forgot to post earlier:

  • Flickering problem (running a new app on an old device could induce a seizure): add a delay AFTER rendering. If you render again without giving the phone a chance to swap buffers, you get a bad flickering problem unless your clear color is black. SDL_Delay(33); worked for me, but subtracting the time it took to render makes more sense. In my experience the old NDK doesn’t need this, but I only tested one device. The flickering is almost never apparent in screenshots from developer previews, but it is present on a real device.

If you want to keep your app from being offered to old devices, maybe the best way is to add the following to AndroidManifest.xml, so the app is only marketed to OpenGL ES 3.1-capable hardware (even if your app only uses GLES 2):

<uses-feature android:glEsVersion="0x00030001" android:required="true" />

Regarding the logging macro, maybe this:

#define stringify(x) _stringify(x)
#define _stringify(x) #x
#define logGottenGlAtrib(literalAttrib) do{ \
int resultInt; \
SDL_GL_GetAttribute(literalAttrib, &resultInt); \
SDL_Log("contexts %s %i", stringify(literalAttrib), resultInt); \
} while(0)

That’s awesome! Thank you!

With regard to the rendering issue, I noticed that I did get the S3 on Firebase working again as long as I only render 2D. I will have to try an NDK update soon to confirm whether that is the only issue there.

I think the issue I am trying to resolve is specific to some Mali GPUs, where I will either have to use the z-buffer when rendering 2D at various depths, or do something else to force it to recognize that my subsequent draw calls are supposed to blend with the earlier ones. In theory, as soon as I draw my background at depth N, any subsequent draw call at that same depth replaces the earlier one for those regions of the screen. In practice, though, since it renders nothing much of the time, there is some other issue here.

From a Mali developer blog post, "Per-Render Target Rendering: Quick Recap":

"As described in my previous blogs, Mali’s hardware engine operates on a two-pass rendering model, rendering all of the geometry for a render target to completion before starting any of the fragment processing. This allows us to keep most of our working state in local memory tightly coupled to the GPU, and minimize the amount of power-hungry external DRAM accesses which are needed for the rendering process."

In my testing, the issue seems specific to this device.

I did stumble on the fact that someone is rewriting or reverse-engineering a new Mali driver; maybe a subsequent device update can fix this issue. There are still a million things to try out. I wanted to try some vanilla SDL rendering on this device, since maybe it’s a solved problem there. My next thing to try is based on this:

" Framebuffer bindings are expensive
Changing the framebuffer binding forces immediately resolving the rendering of the current framebuffer."

So if that’s the case, what I would need to sprinkle between draw calls is glBindFramebuffer(GL_FRAMEBUFFER, 0); though I can’t be sure I won’t need to bind a different framebuffer in between to keep the redundant bind from being ignored. I’m still not sure how well the GPU would handle my <256 possibly blended draw calls; maybe I can disable blending for some of them. The over-aggressive optimization could be great if it were somehow tied to blending.

Whoever built the driver seems to have tested only a single draw call with the default projection and not to have thought about blending at that projection, so it’s heavily optimized for 3D usage (even though the device doesn’t really have enough VRAM for any 3D models I have tried).