From: stephane.marchesin [mailto:stephane.marchesin at wanadoo.fr]
Sent: Tuesday, August 3, 2004 20:22
Subject: Re: [SDL] SDL, OpenGL, texture conversion and speed.
– SNIP –
> What you're trying to do is not very clear to me. It seems you want
> to render to an SDL_Surface and then use this surface to create an
> OpenGL texture? If so, here is my bit of advice:
In short (and to answer the next portion of your text), I have the
following situation.
Currently the Virtual Jaguar/SDL project has an OpenGL "engine"
so that people can enjoy accelerated 2D blitting. But the problem
lies in how it's implemented (based upon the testgl.c example),
since it needs a conversion from a 16 bpp buffer to a 32 bpp
SDL_Surface, which is then converted to an OpenGL texture and
blitted to the main display surface.
In fact, I use two surfaces in normal SDL blitting. For SDL/OpenGL
blitting I need a third SDL_Surface so that I can convert the 16 bpp
surface to a 32 bpp one, which is then converted to an OpenGL
texture (and then blitted to the screen).
The SDL_Surface mainsurface is my primary display surface and is in
fact the screen/window which the user sees. The SDL_Surface surface
is my buffer where the emulator pumps its graphical data. In normal
SDL blitting mode I just do an SDL_BlitSurface so that the graphical
data from surface is blitted onto the screen (= mainsurface).
For the use of OpenGL I have made a (temporary) SDL_Surface called
texture (and not src as I wrote earlier) which is 32 bpp, so that
I can use the following code:
void sdlemu_draw_texture(SDL_Surface * dst, SDL_Surface * src, int texturetype)
{
    // Convert color-indexed surface to RGB texture.
    // src = display and contains VJ's graphical data.
    // dst = maindisplay and is our main window.
    // texture = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
    // #if SDL_BYTEORDER == SDL_LIL_ENDIAN
    //     0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
    // #else
    //     0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF);
    // #endif
    // where w and h are the power-of-two width and height of
    // display (our graphical buffer).
    SDL_BlitSurface(src, NULL, texture, NULL);

    // Texture-map the complete texture to the surface so we have
    // free scaling and antialiasing.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texture->w, texture->h,
                    GL_RGBA, GL_UNSIGNED_BYTE, texture->pixels);

    glBegin(GL_TRIANGLE_STRIP);
    glTexCoord2f(0.0f, 0.0f);         glVertex3i(0, 0, 0);
    glTexCoord2f(texcoord, 0.0f);     glVertex3i(dst->w, 0, 0);
    glTexCoord2f(0.0f, texcoord);     glVertex3i(0, dst->h, 0);
    glTexCoord2f(texcoord, texcoord); glVertex3i(dst->w, dst->h, 0);
    glEnd();
}
I hope that the above code makes it somewhat easier to understand
what I mean. The conversion from the 16 bpp display to the 32 bpp
texture takes a very long time because of the format differences.
I hope it's possible to somehow make this conversion go away or do
an enhanced conversion. While it works, it has one drawback: it's
much slower than normal SDL blitting, while OpenGL should be faster
since it's accelerated (tested on Linux, Win32 and MacOS X).
> First, I don't understand why you need three buffers. OpenGL 1.2+
> supports texture uploading for most pixel formats. In this case you
> can upload the texture directly from the original buffer by giving
> the correct pixel format (read the glTexImage2D manpage, it lists
> the supported formats). Try to use the glTexSubImage2D call instead
> of the glTexImage2D call when possible if you don't fill your whole
> texture with data (for example when you have to pad the size to the
> next power of 2); some OpenGL drivers can take advantage of this
> (some others upload the full texture again). You can also use
> glPixelStore to upload only the relevant part of a surface to a
> texture without having to do a copy.
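The power-of-two padding mentioned above can be sketched like this; a hypothetical helper, not code from either project, but it shows where a texcoord fraction like the one in my code comes from (the used width divided by the padded texture width):

```c
#include <assert.h>

/* Round a texture dimension up to the next power of two, since
 * 2004-era GPUs generally require power-of-two texture sizes.
 * The fraction w / npot_pad(w) is then the texture coordinate of
 * the right (or bottom) edge of the used region. */
static int npot_pad(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```

For example, a 320x240 display pads to a 512x256 texture, and only the 0.0-0.625 by 0.0-0.9375 texcoord region is actually drawn, which is exactly the case where updating only the used part with glTexSubImage2D beats re-uploading the whole padded texture.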
In fact I use two (2) buffers: display (the Virtual Jaguar graphical
buffer) and texture (for converting display to an OpenGL-convertible
format).
> As a rule of thumb, the less data you need for your texture, the
> faster the upload. For example, if you only use 8 bpp, try to make
> use of paletted textures. If you can afford using 15/16 bpp instead
> of 24 bpp, that's fine too. Also, ATI cards benefit a lot from the
> reversed BGR pixel format when doing texture uploads. Nvidia cards
> are less sensitive, performance-wise, to the pixel format you use.
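To make the paletted-texture suggestion concrete, here is a sketch of what a software 8 bpp to 32 bpp expansion does per pixel (an illustrative function, not from either codebase). With the GL_EXT_paletted_texture extension (glColorTableEXT plus a GL_COLOR_INDEX8_EXT texture) the raw indices go straight to the card, so this loop, and the bandwidth for the extra three bytes per pixel, disappear:

```c
#include <assert.h>
#include <stdint.h>

/* Software palette expansion: look up each 8-bit index in a
 * 256-entry RGBA palette and write the 32-bit result. This is the
 * per-frame work that a hardware paletted texture avoids. */
static void expand_row(const uint8_t *idx, const uint32_t *palette,
                       uint32_t *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = palette[idx[i]];
}
```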
From what I understood, OpenGL only allows 32 bpp textures and
therefore I need to convert display to 32 bpp. But as I stated, this
is slow. From what I understand, you're saying that OpenGL textures
can be 16 bpp but need to be "converted" so that OpenGL can
understand them?
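For reference, a sketch of the 5-6-5 bit layout that OpenGL 1.2's GL_UNSIGNED_SHORT_5_6_5 pixel type describes, the same layout a 16 bpp RGB565 SDL surface uses. The unpack below is roughly the work a software 16 to 32 bpp blit has to do for every pixel, every frame, and what passing the matching format/type to glTexSubImage2D hands off to the driver instead:

```c
#include <assert.h>
#include <stdint.h>

/* Expand one RGB565 pixel to 8-bit channels. The high bits are
 * replicated into the low bits so that full intensity (0x1F or
 * 0x3F) maps to 0xFF rather than 0xF8/0xFC. */
static void rgb565_unpack(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (p >> 11) & 0x1F;   /* bits 15..11: red   */
    uint8_t g6 = (p >> 5)  & 0x3F;   /* bits 10..5:  green */
    uint8_t b5 = p & 0x1F;           /* bits  4..0:  blue  */
    *r = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));
}
```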
> In an emulator I'm working on, I've implemented OpenGL scaling of
> the display, and an 8 bpp display with a paletted texture is faster
> than SDL with a software surface under X11. So using a texture for
> uploads can be a win from a performance viewpoint if you're careful.
The project I'm talking about is Virtual Jaguar/SDL, a portable Atari
Jaguar emulator which was ported from Visual C++/asm/DirectX (with
French commentary) to standard GNU C/C++ and SDL. If you could share
some sources, I would be grateful!
> (btw I'm currently rewriting the SDL OpenGL-scaling backend to
> support different texture formats, and thus avoid copying data
> around).
I'm very interested in this information! You can contact me privately
if you don't want to share this information with the world.