By the looks of the in-game screenshot, there should be no need for
per-pixel effects. There doesn’t even seem to be any antialiasing
going on, which simplifies matters further, as you don’t have to
worry about the “read from VRAM” issue, where SDL does software alpha
blending on a hardware display surface.
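A quick way to see whether that issue would even apply is to check
what kind of display surface you actually got. A minimal SDL 1.2
sketch (the 640x480 mode and the flags are just placeholders, not
anything from PacDude):

#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
	SDL_Surface *screen;

	if(SDL_Init(SDL_INIT_VIDEO) < 0)
		return 1;

	/* Ask for a hardware surface; SDL may or may not comply. */
	screen = SDL_SetVideoMode(640, 480, 0,
			SDL_HWSURFACE | SDL_DOUBLEBUF);
	if(!screen)
		return 1;

	if(screen->flags & SDL_HWSURFACE)
		printf("Display surface is in VRAM; software alpha\n"
				"blending onto it means slow readbacks.\n");
	else
		printf("Display surface is in system RAM; alpha\n"
				"blending is just normal CPU work.\n");

	SDL_Quit();
	return 0;
}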
Anyway, I’ll try and have a look at the code later.
Meanwhile, you might want to try Kobo Deluxe on those machines.
(Without glSDL/OpenGL, obviously!) It should generate a similar
rendering load to PacDude (entire playfield view needs updating every
frame, as I understand it?), except that Kobo Deluxe normally uses
alpha blending for antialiasing of sprites. (RLE accelerated, with
rendering done into a software shadow buffer regardless of video
settings.)
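In SDL 1.2 terms, that approach looks roughly like this. (Just a
sketch of the idea, not actual Kobo Deluxe code; 'screen' is the
display surface and 'sprite' is assumed to be some loaded RGBA
surface.)

/* One-time setup: software shadow buffer + RLE accelerated sprite */
SDL_Surface *screen = SDL_GetVideoSurface();
SDL_Surface *shadow = SDL_CreateRGBSurface(SDL_SWSURFACE,
		screen->w, screen->h, screen->format->BitsPerPixel,
		screen->format->Rmask, screen->format->Gmask,
		screen->format->Bmask, 0);
SDL_Surface *spr = SDL_DisplayFormatAlpha(sprite);
SDL_SetAlpha(spr, SDL_SRCALPHA | SDL_RLEACCEL, SDL_ALPHA_OPAQUE);

/* Per frame: all alpha blending is done in system RAM... */
SDL_BlitSurface(spr, NULL, shadow, NULL);
/* ...and only one opaque blit touches the (possibly hardware) screen */
SDL_BlitSurface(shadow, NULL, screen, NULL);
SDL_Flip(screen);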
Use some resolution that gets the playfield view about the same size
as the PacDude window. Kobo Deluxe only updates that part of the
screen at the full frame rate, so the total window size is
irrelevant.
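That is, something along these lines per frame (the position and size
of the playfield rectangle are made up here):

/* Push only the playfield rectangle to the display; the rest of
 * the window is left alone, so its size doesn't matter. */
SDL_Rect playfield = { 16, 16, 224, 248 };
/* ...render the playfield into 'screen'... */
SDL_UpdateRect(screen, playfield.x, playfield.y,
		playfield.w, playfield.h);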
Note that although Kobo Deluxe uses alpha blending only around the
edges of sprites, it can still have some impact on slow CPUs. Try
disabling alpha blending, either in the configuration or with the
-noalpha command line switch, to see if it makes a difference.
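For example (assuming the executable is installed as 'kobodl'; the
name may differ depending on how it was packaged):

	kobodl -noalpha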
Also be warned that the 0.5.x versions have new graphics and
potentially a bunch of brand new bugs. (Quite a few people have tried
these versions already, but you never know…) 0.5.x may be slower
than the 0.4.x versions on a particular machine - or faster, for that
matter. After all, it’s a game - not a benchmark. 
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'

On Sunday 24 August 2008, Peter Mackay wrote:
Going by recent posts, my first guess would be that you’re doing
something expensive per pixel.
If you’re doing a lot of per-pixel effects, you may be much better
off using OpenGL instead of manually poking pixels, letting the
hardware do the work instead of having lots of special case code to
deal with different bit depths.
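To illustrate the bit depth issue: below is more or less the usual
putpixel from the SDL 1.2 documentation. Every pixel format needs its
own case, and you still have to lock the surface around the pokes if
SDL_MUSTLOCK() says so. (Rough sketch; error handling and locking
left out.)

#include <SDL.h>

static void putpixel(SDL_Surface *s, int x, int y, Uint32 color)
{
	int bpp = s->format->BytesPerPixel;
	Uint8 *p = (Uint8 *)s->pixels + y * s->pitch + x * bpp;

	switch(bpp)
	{
	  case 1:
		*p = (Uint8)color;
		break;
	  case 2:
		*(Uint16 *)p = (Uint16)color;
		break;
	  case 3:
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
		p[0] = (color >> 16) & 0xff;
		p[1] = (color >> 8) & 0xff;
		p[2] = color & 0xff;
#else
		p[0] = color & 0xff;
		p[1] = (color >> 8) & 0xff;
		p[2] = (color >> 16) & 0xff;
#endif
		break;
	  case 4:
		*(Uint32 *)p = color;
		break;
	}
}

With OpenGL, all of that (and the blending) is the driver's problem.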