Colorkey in the window

Hi Everyone,

I’m looking for help implementing a feature in my program. I would like to ignore the red color (255, 0, 0) completely and replace it with alpha. I have found a tutorial online, this one:
Link to tutorial

The program runs, but it does not ignore the red color.
I have made two test PNGs.

I render them on top of each other, so the left side of “2.png” should be invisible and I would see half of “1.png”.

#include <stdio.h>
#include <iostream>
#include <SDL2/SDL.h>
#include <SDL2/SDL_render.h>
#include <SDL2/SDL_image.h>
#include <SDL_opengles2.h>

// ./configure --disable-video-mir

const GLchar* vertexSource =
"varying vec2 vTexCoord;\n"
"void main(void)\n"
"{\n"
"   vTexCoord = gl_MultiTexCoord0.xy;\n"
"   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
"}\n";

const GLchar* fragmentSource =
"uniform sampler2D myTexture;\n"
"varying vec2 vTexCoord;\n"
"void main(void)\n"
"{\n"
"   vec4 color = texture2D(myTexture, vTexCoord);\n"
"   if (color.rgb == vec3(1.0, 0.0, 0.0))\n"
"      discard;\n"
"   gl_FragColor = color;\n"
"}\n";

void printShaderLog( GLuint shader )
{
	if( glIsShader( shader ) )
	{
		int infoLogLength = 0;
		int maxLength = infoLogLength;
		glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &maxLength );
		char* infoLog = new char[ maxLength ];
		glGetShaderInfoLog( shader, maxLength, &infoLogLength, infoLog );
		if( infoLogLength > 0 )
		{ //Print Log
			printf( "%s\n", infoLog );
		}
		//Deallocate string
		delete[] infoLog;
	}
	else
	{
		printf( "Name %d is not a shader\n", shader );
	}
}
int main()
{
	if (SDL_Init(SDL_INIT_VIDEO) < 0)
		printf("SDL ERROR:%s\n", SDL_GetError());
	int numdrivers = SDL_GetNumRenderDrivers ();
	std::cout << "Render driver count: " << numdrivers << std::endl;
	for (int i = 0; i < numdrivers; i++)
	{
		SDL_RendererInfo drinfo;
		SDL_GetRenderDriverInfo (0, &drinfo);
		std::cout << "Driver name (" << i << "): " << drinfo.name << std::endl;
		if (drinfo.flags & SDL_RENDERER_SOFTWARE) std::cout << "the renderer is a software fallback" << std::endl;
		if (drinfo.flags & SDL_RENDERER_ACCELERATED) std::cout << "the renderer uses hardware acceleration" << std::endl;
		if (drinfo.flags & SDL_RENDERER_PRESENTVSYNC) std::cout << "present is synchronized with the refresh rate" << std::endl;
		if (drinfo.flags & SDL_RENDERER_TARGETTEXTURE) std::cout << "the renderer supports rendering to texture" << std::endl;
	}
	SDL_Window *SCREEN = SDL_CreateWindow("Screen", 0, 0, 320, 240, SDL_WINDOW_OPENGL);
	if (SCREEN == NULL)
		printf("SDL WINDOW:%s\n", SDL_GetError());
	SDL_Renderer *RENDER = SDL_CreateRenderer(SCREEN, -1, 0);
	if (RENDER == NULL)
		printf("SDL RENDERER:%s\n", SDL_GetError());
	SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
	SDL_GLContext CONTEXT = SDL_GL_CreateContext(SCREEN);

	GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
	glShaderSource(vertexShader, 1, &vertexSource, NULL);
	glCompileShader(vertexShader);
	GLint vCompiled = GL_FALSE;
	glGetShaderiv( vertexShader, GL_COMPILE_STATUS, &vCompiled );
	if( vCompiled != GL_TRUE )
	{
		printf( "Unable to compile vertex shader %d!\n", vertexShader );
		printShaderLog( vertexShader );
		return 1;
	}
	GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
	glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
	glCompileShader(fragmentShader);
	vCompiled = GL_FALSE;
	glGetShaderiv( fragmentShader, GL_COMPILE_STATUS, &vCompiled );
	if( vCompiled != GL_TRUE )
	{
		printf( "Unable to compile fragment shader %d!\n", fragmentShader );
		printShaderLog( fragmentShader );
		return 1;
	}
	GLuint shaderProgram = glCreateProgram();
	glAttachShader(shaderProgram, vertexShader);
	glAttachShader(shaderProgram, fragmentShader);
	glLinkProgram(shaderProgram);
	glUseProgram(shaderProgram);

	SDL_Texture *img1 = IMG_LoadTexture(RENDER, "1.png");
	SDL_Texture *img2 = IMG_LoadTexture(RENDER, "2.png");
	SDL_Rect texr;
	texr.x = 0;
	texr.y = 0;
	texr.w = 320;
	texr.h = 240;
	int i = 100;

	while (i > 0)
	{
		SDL_Event e;
		while (SDL_PollEvent(&e)) {}
		SDL_RenderCopy(RENDER, img1, NULL, &texr);
		SDL_RenderCopy(RENDER, img2, NULL, &texr);
		SDL_GL_SwapWindow(SCREEN);
		i--;
	}

	SDL_DestroyTexture(img1);
	SDL_DestroyTexture(img2);
	SDL_DestroyRenderer(RENDER);
	SDL_DestroyWindow(SCREEN);
	SDL_Quit();
	return 0;
}


SDL_CXXFLAGS := $(shell sdl2-config --cflags)
SDL_LDFLAGS := $(shell sdl2-config --libs)

CXX ?= g++
CXXFLAGS += -Wno-write-strings -g $(SDL_CXXFLAGS)
LDFLAGS := $(SDL_LDFLAGS) -lEGL -lGLESv2 -lSDL2_ttf -lSDL2_image -lSDL2_gfx 

all : videotest

OBJS := main.o

videotest : $(OBJS)
	$(CXX) -o videotest  $(OBJS) $(LDFLAGS)  

$(OBJS) : %.o : %.cpp
	$(CXX) $(CXXFLAGS) -o $@ -c $<

clean :
	rm -f $(OBJS) videotest

I’m using SDL2 2.0.5 on Ubuntu 17.04.
I’m looking forward to your answers.
Many thanks!

It looks like you’re trying to load shader code in the renderer’s OpenGL context. Note that this is not recommended because the opengl and opengles2 render drivers use shaders themselves to do some texture format conversions. It’s possible to override them, but it will be necessary to replicate the behavior of those shaders for specific texture formats or the result won’t look as expected (see source of the opengl and opengles2 renderers). If you’re not going to encounter those formats you may get away with ignoring this.

The SDL renderers assume that no one is meddling with the context, and they cache some values, including the shader they have set themselves. The first use of SDL_RenderCopy will set the appropriate shader for the texture. If the texture format doesn’t change, the renderer won’t reset the shader, and you are free to set your own. Since this is an internal SDL detail, the behavior may change in any new version. It’s really not recommended to access the renderer’s context like this.

Also note that the first renderer you’re getting (when calling SDL_CreateRenderer with the second argument as -1) is probably the opengl renderer and not opengles2.
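If you specifically want opengles2, one option is to look up its driver index by name and pass that to SDL_CreateRenderer instead of -1. An untested sketch (`window` stands for your SDL_Window):

```cpp
// Untested sketch: find the "opengles2" render driver by name.
int driverIndex = -1; // -1 lets SDL pick; overwritten if opengles2 is found
for (int i = 0; i < SDL_GetNumRenderDrivers(); i++) {
    SDL_RendererInfo info;
    if (SDL_GetRenderDriverInfo(i, &info) == 0 &&
        SDL_strcmp(info.name, "opengles2") == 0) {
        driverIndex = i;
        break;
    }
}
SDL_Renderer *renderer = SDL_CreateRenderer(window, driverIndex, 0);
```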

To get your example to work do the following:

  • Do not create your own OpenGL context. You want the code to be loaded into the renderer’s context which will be current after the SDL_CreateRenderer call. Just comment out or delete the call to SDL_GL_CreateContext.
  • Call glUseProgram after the first call to SDL_RenderCopy. As explained above, SDL sets the shader for the texture format and won’t change it unless it encounters another. You can override it with your own after that.
  • It’s probably better to use SDL_RenderPresent instead of SDL_GL_SwapWindow since the first one will trigger renderer specific stuff.
  • Small bug:
    SDL_GetRenderDriverInfo (0, &drinfo);
    should be
    SDL_GetRenderDriverInfo (i, &drinfo);
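Sketched out (untested, and relying on the internal behavior described above), the draw part of the loop would become something like:

```cpp
// No SDL_GL_CreateContext call: reuse the context the renderer created.
SDL_RenderCopy(RENDER, img1, NULL, &texr); // first copy: SDL binds its own shader
glUseProgram(shaderProgram);               // now override it with the colorkey shader
SDL_RenderCopy(RENDER, img2, NULL, &texr); // red pixels of 2.png get discarded
SDL_RenderPresent(RENDER);                 // instead of SDL_GL_SwapWindow
```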

I do not have much experience with this. You may need to look further at the SDL code to understand the renderer behavior and work around it. Again, all of this may change with the next version of SDL.

Going pure OpenGL may be less frustrating than dancing around the behavior of the SDL renderers.


It’s possible that SDL_gpu would be helpful if you’re looking for interoperability with shaders. There’s a simple-shader demo that shows how to put together shaders and their data:

That demo also uses unversioned shaders for a bit more compatibility, so if you don’t need that, you can replace calls to load_shader with GPU_LoadShader.
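From memory, the flow looks roughly like this; treat the exact calls and the shader file names as assumptions and check the demo for the real code:

```cpp
// Rough SDL_gpu sketch (untested; shader file names are placeholders).
GPU_Target* screen = GPU_Init(320, 240, GPU_DEFAULT_INIT_FLAGS);
GPU_Image* img = GPU_LoadImage("2.png");

Uint32 v = GPU_LoadShader(GPU_VERTEX_SHADER, "colorkey.vert");
Uint32 f = GPU_LoadShader(GPU_FRAGMENT_SHADER, "colorkey.frag");
Uint32 program = GPU_LinkShaders(v, f);

// Tell SDL_gpu which attribute/uniform names the shaders use.
GPU_ShaderBlock block = GPU_LoadShaderBlock(program,
    "gpu_Vertex", "gpu_TexCoord", "gpu_Color", "gpu_ModelViewProjectionMatrix");
GPU_ActivateShaderProgram(program, &block);

GPU_Blit(img, NULL, screen, 160, 120);
GPU_Flip(screen);
```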

Hi there,

Thank you very much for the detailed explanation, and the good news is that it is working in OpenGL!
As you have seen, I am trying to make this work in OpenGL ES 2.

I have applied the quick bugfix you pointed out. The OpenGL ES 2 driver is (1) in my case, so I have modified the renderer creation and also added

SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengles2");

to force OpenGL ES 2.


Unfortunately, the vertex shader fails to compile with these errors:

0:4(14): error: `gl_MultiTexCoord0' undeclared
0:4(14): error: type mismatch
0:5(16): error: `gl_ModelViewProjectionMatrix' undeclared
0:5(47): error: `gl_Vertex' undeclared
0:5(16): error: operands to arithmetic operators must be numeric 
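From what I have found so far, those built-ins were removed in GLSL ES 2.0, so the shader has to declare its own inputs. Something like this, I think (untested; aPosition, aTexCoord and uMVPMatrix are placeholder names, and the application would have to supply the attribute arrays and the matrix uniform itself):

```glsl
attribute vec4 aPosition;  // replaces gl_Vertex
attribute vec2 aTexCoord;  // replaces gl_MultiTexCoord0
uniform mat4 uMVPMatrix;   // replaces gl_ModelViewProjectionMatrix
varying vec2 vTexCoord;

void main(void)
{
    vTexCoord = aTexCoord;
    gl_Position = uMVPMatrix * aPosition;
}
```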

I will try to look up how to port the code.
Thank you again!