Bernd Kreimeier wrote:
- looked at H2 CIN_ API
- looked at SMPEG
- felt uneasy
- started thinking some more
I’ll try to put it this way: I have to write an MPEG
frame of choice into an LFB memory area that is then
set as a bitmap (DrawPixels) or texture for Mesa/OpenGL.
That is the way to get it through Glide to the VG/V2
passthrough output. I would allocate the LFB area,
and advance its update by H2 ticks.
SMPEG hands all control over to SDL, which turns H2
(formerly using Smacker) kinda upside down. I’d rather
tread more lightly at this point.
Do we have a Glide/Mesa app using SMPEG already?
Myth2, but it only uses SMPEG when the X window is up, not during Glide
I think I have seen this design issue before. There
should be a clear separation between import (decoding,
conversion to requested depth/size, write to location),
context (allocate/switch surface to write to), and
control (how many milliseconds, which frame). I.e.,
SMPEG_play is actually SDL_MpegPlay.
SDL seems to aim for two different levels at the same
time: the base API providing system services in a
portable way (LFB, sound device, soon OpenGL), and
a GLUT-like toolkit (with all the familiar struggles that entails).
Lessons I learned from OpenGL:
- always maintain data export/import separately
- never enforce data structures at API level
- use primitive types and pointers exclusively
- make context current without using a handle at API level
The more convenient SDL is as a toolkit for aspiring
game coders starting from scratch, the less suitable
it will be as a toolbox to retrofit an existing engine.
Maybe that is what Michael and Karl were bitching about
recently (not that they ever volunteered specifics, ahem).
The probable solution is a two-part API, much like
GL vs. GLUT - the low level SDL providing the services,
the high level SDL offering the GLUT-like toolkit.
SDL+Mesa will find itself in direct competition with
GLUT. I think the connection is obvious.
I think you are absolutely right. My brain is fried from looking at SDL
for two years now, so any fresh perspective is welcome. I think you
will do a good job of stepping back and seeing what needs to be done.
I can use the existing SMPEG source for Civ, and you can fork the source
into a development tree to test out ideas.
Just grist for the mill, here are some ideas I’ve been toying with over
the past few weeks:
Separate SDL into:
video + events
-> multiple drivers, each having one or more displays, each having a
"screen" which can be set to any resolution or bit-depth, and a set of
input devices that can be enabled: joysticks, mice, keyboards, etc.
audio
-> multiple drivers, each having one or more sound cards which can be
placed into half or full duplex mode for output or recording
Obviously this would be a lot of work which I’m too tired to even
contemplate at the moment, but it would work really well in that the
multiple driver idea would allow us to eliminate the dynamic loading
cruft, and make the code much more reentrant. We could also do things
like handle multiple output windows, or multiple keyboard input sources
from the same application. There is also a good chance that an
appropriate low level framework could be written to make adding drivers
easy, and allow layering for people who want to get as close to the
native APIs as they want, while still providing the simple API for other
developers. I ramble, but anyway… just food for thought.
I’m CC’ing the SDL mailing list because there may be other people with
good insight. :)
--
-Sam Lantinga, Lead Programmer, Loki Entertainment Software