Dear Sammy & dudes:
I have a question concerning SDL's keyboard event system. It seems like
SDL_GetKeyState() doesn't work unless the event queue is empty… is this
correct?
Two things are happening here. One, the keyboard state is only updated
when you pump the event loop with either SDL_PollEvent() or SDL_WaitEvent().
Two, the keyboard state is updated immediately. The net effect is that
SDL_GetKeyState() shows you the current keyboard state as of the most
recent call to the SDL event functions.
I’ll take a look at the code to see if it bypasses the raw event reading
when the queue has something in it. I don’t believe that it does.
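In practice this means a program should pump the queue once per frame before reading the key array. A minimal sketch against the SDL 1.2 API (the window size and loop structure are just placeholders, not anything from the original thread):

```c
#include <SDL/SDL.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;
    SDL_SetVideoMode(320, 240, 0, SDL_SWSURFACE);

    int running = 1;
    while (running) {
        SDL_Event event;
        /* Pumping the queue is what refreshes the key state array */
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT)
                running = 0;
        }
        /* The snapshot is current as of the PollEvent calls above */
        Uint8 *keys = SDL_GetKeyState(NULL);
        if (keys[SDLK_ESCAPE])
            running = 0;
        SDL_Delay(10);
    }
    SDL_Quit();
    return 0;
}
```

If you never call SDL_PollEvent(), SDL_WaitEvent(), or SDL_PumpEvents(), the array returned by SDL_GetKeyState() simply never changes.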
My other question is: is there a way to directly access the framebuffer
memory? I'm talking about going beyond the SDL_UpdateRect(surface, x, y,
w, h) call. I want to access the MIT shared memory itself. Is this possible?
Yes. That’s the whole idea.
OK, under X11, if you write to screen->pixels directly, you are writing to
the MIT shared memory. This is still offscreen memory however, and when
you call SDL_UpdateRect(), the X server copies it to the video RAM.
Using the DGA fullscreen extensions, it is possible to write directly to
the video memory, again by writing to screen->pixels.
If the screen surface has SDL_HWSURFACE set in its flags, then when you
write to screen->pixels, you are writing to video memory. Note that
this is often slow, and it's sometimes hard to prevent flicker when you
draw directly to the visible screen.
The testsprite test program in the 0.9.x release of SDL shows various
ways of blitting sprites and how to detect the hardware capabilities.
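For reference, a small sketch of checking for a hardware surface after SDL_SetVideoMode() in SDL 1.2 (the resolution, depth, and flags here are arbitrary examples, not from the original thread):

```c
#include <SDL/SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Ask for a fullscreen hardware surface; SDL may silently
       fall back to a software surface if it can't provide one */
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 16,
                                           SDL_HWSURFACE | SDL_FULLSCREEN);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }

    if (screen->flags & SDL_HWSURFACE)
        printf("screen->pixels points into video memory\n");
    else
        printf("screen->pixels is an offscreen software buffer\n");

    SDL_Quit();
    return 0;
}
```

Checking screen->flags after the call, rather than assuming you got what you asked for, is the reliable way to detect what the hardware actually provided.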
Plus: under DirectX on Win32, when I draw pixels directly to the
screen… they don't look right.
Did you check the pixel format? It’s probably RGB 555 (15 bits) rather
than RGB 565.
I’m using loser-ass 16 bits.
Loser-ass 16 bits is almost twice as fast to blit as 32 bits.
But I noticed that in SDL's blit
code there is a reference to an info->table structure. Then in the blit
code itself… there is something like surface->map, or info->map? OK… I'm
just asking for elaboration. See, I think the problem comes up because I
am taking a step beyond SDL and its "simplicity", heh, no offense Sam
dude… but I needz da speed… my own "more versatile" framebuffer code
is needed. Is this realistic?
SDL is highly tuned, but if you can do the job better for your application,
by all means, go for it. SDL is designed to get out of your way when you
need more control.
First: You need to let SDL pick the best pixel format for you.
Second: Your blit code needs to be able to convert to the available
graphics display. If your blit code only supports a few
modes, then you might have SDL set the best mode first, then
call SDL_SetVideoMode again with a mode you support. SDL will
convert for you, but again – let SDL pick the best mode first,
so no conversion needs to be done. Many people do run their
desktops in 16-bit color, because it is the best balance of
color and speed.
Third: Be sure to lock the SDL surface when you need to – use the
SDL_MUSTLOCK() macro. To get the fastest possible speed, you
may need to lock the video memory into your address space.
Note that writing to video memory is generally a bad idea with new
video cards because it defeats bus-mastering.
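Putting the second and third points together, here is a hedged sketch of a 16-bit solid fill that honors SDL_MUSTLOCK() and the surface pitch. It assumes a 16-bpp screen surface already obtained from SDL_SetVideoMode(); the function name is just an example:

```c
#include <SDL/SDL.h>

/* Fill the whole screen with one 16-bit pixel value, locking only
   when the surface requires it (hardware surfaces usually do). */
static void fill16(SDL_Surface *screen, Uint16 color)
{
    if (SDL_MUSTLOCK(screen)) {
        if (SDL_LockSurface(screen) < 0)
            return;
    }

    Uint8 *row = (Uint8 *)screen->pixels;
    int x, y;
    for (y = 0; y < screen->h; y++) {
        Uint16 *p = (Uint16 *)row;
        for (x = 0; x < screen->w; x++)
            p[x] = color;
        row += screen->pitch;   /* pitch is in bytes and may exceed w * 2 */
    }

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);

    /* w = 0, h = 0 means "update the entire surface" */
    SDL_UpdateRect(screen, 0, 0, 0, 0);
}
```

Stepping rows by screen->pitch rather than w * 2 matters on hardware surfaces, where each scanline may be padded out to an alignment boundary.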
The next generation of DGA will be much improved, including changing the
color depth on the fly.
-Sam Lantinga (slouken at devolution.com)
Lead Programmer, Loki Entertainment Software
Author of Simple DirectMedia Layer