Porting SDL to new hardware

Is there any documentation explaining what is required to get SDL running on new
hardware? I’m most interested in which functions need to be implemented and what
they need to do for the video part of SDL.

Thank you!

Brian

Is there any documentation explaining what is required to get SDL
running on new hardware? I’m most interested in which functions need
to be implemented and what they need to do for the video part of SDL.

Sorry, I don’t have the answer to your question, but I thought SDL was on
everything. What hardware do you want to port to?

matt

On Thu, 26 Apr 2007 19:41:10 +0000 (UTC) Brian Gunn wrote:

matt <mattmatteh earthlink.net> writes:

Sorry, I don’t have the answer to your question, but I thought SDL was on
everything. What hardware do you want to port to?

I have an embedded processor I need to get it working on.

Brian

Sorry, I don’t have the answer to your question, but I thought SDL was on
everything. What hardware do you want to port to?

I have an embedded processor I need to get it working on.

First things first: SDL probably expects to have at least a 32-bit
linear address space. I don’t know if it would survive on something like
the old MS-DOS 16-bit segmented nastiness, but no one has ever tried to
my knowledge. The library is 32 and 64-bit clean for sure, though…I
just wanted to throw that out there, in case “embedded” means “really
really underpowered.”

If your platform is running some sort of Unix system (embedded Linux is
getting popular nowadays), then chances are you can just push SDL
through a cross compiler and it’ll work out of the box…there are C
fallbacks for everything that has CPU-specific assembly code, so you
should be able to at least bootstrap it, presuming that there’s a video
API that already works with the system (like the fbcon driver in Linux,
etc).

If not, you can disable entire subsystems you don’t need (like the
joysticks, CD-ROM, etc) so they won’t get compiled in at all. In these
cases, you just don’t need to worry about them.

If there isn’t a video driver for your platform, you can take a look at
the large variety of targets in the src/video directory. If there isn’t
one that works on your platform directly, there’s probably one that works
like your platform and can serve as a decent starting point.

At the most basic level, you need to supply a framebuffer where the
application can write pixels and tell SDL what format you expect those
pixels to be in, and be able to move data from there to the screen on
request…if there’s a discrepancy, SDL can convert between the
application and your video target, so both deal with the data format
that they expect. This, of course, adds a performance hit, but even on
embedded systems where you’d want to avoid this, it’s great when
bootstrapping a new target, since you can just use all the test programs
in the test/ directory and not think much about it.

You might want to look at the src/video/dummy directory. This is a
skeleton example (it just lies and says there’s a video target
available…this was for disabling video without the application knowing
it), but it can serve as a decent skeleton source tree for a new
target…you’ll want to look at other targets too, since there are some
nuances that the dummy driver ignores.

After you get video displaying at all, you’ll probably want to look at
the blitters in src/video/SDL_blit* …there is MMX, AltiVec, etc.
optimized code to move and convert pixels, and you might get a good
boost by writing customized code for your CPU architecture here. If you
are on an x86 chip, you’re probably already good to go (MMX, 3DNow,
SSE, and some generic hand-tuned optimizations are all represented
already).

Good luck. If this works out, please send patches so we can include them
in mainline SDL. :)

–ryan.

Is there any chance that SDL can support the Xbox, PlayStation 3, and
others, just like it supports the Dreamcast?

On 4/27/07, Ryan C. Gordon wrote:

[full quote of Ryan’s reply snipped]

I’m pretty sure it’s supported on Xbox already… and I’m sure it’s
supported on Xbox Linux in some respect. As with PS3 Linux.

Though I don’t believe the Xbox port is in the main branch.

On Fri, 27 Apr 2007 14:00:49 -0300 Danilo Vargas wrote:

Is there any chance that SDL can support the Xbox, PlayStation 3, and
others, just like it supports the Dreamcast?

[full quote of Ryan’s reply snipped]

Hello!

Is there any chance that SDL can support the Xbox, PlayStation 3, and
others, just like it supports the Dreamcast?

At the moment, SDL works on the PS3 under Linux.
Is there a free usable SDK for the Xbox and/or PS3?

CU

Ryan C. Gordon <icculus icculus.org> writes:

First things first: SDL probably expects to have at least a 32-bit
linear address space.

Yeah, that’s not a problem. I’ve got a 32-bit processor with an MMU running
a full 2.6 Linux kernel.

…chances are … it’ll work out of the box…presuming that there’s a video
API that already works with the system (like the fbcon driver in Linux,
etc).

Unfortunately there’s not any video API that already works with this
hardware. :(

I did manage to get it somewhat working, but I still don’t understand
everything. For instance, what does UpdateRects() do? The only parameters are
an array of SDL_Rect structures and the number of rects. I’ve looked at a few of
the sample drivers and I still haven’t figured out exactly what they do. The
windib driver blits from an offscreen frame buffer to the display. In my driver,
I tried doing a memcpy from the shadow pixels to the screen pixels (since I
don’t have a blit between two SW buffers). With the testsprite program, I get
nothing displayed. With testpalette, I get a crash in the memcpy. Am I on the
right track for what UpdateRects() is supposed to do?

Thanks!

Brian

Unfortunately there’s not any video API that already works with this
hardware. :(

I don’t know anything about fbcon internals, but you might find it’s
easier to write a Linux fbcon driver and not make changes to SDL at all.
That might be way harder, though, I don’t really know…just wanted to
give you options for where you can dig in.

I did manage to get it somewhat working, but I still don’t understand
everything. For instance, what does UpdateRects() do? The only parameters are
an array of SDL_Rects() and the number of rects. I’ve looked at a few of the

UpdateRects assumes you have a single screen surface that was previously
configured in your SetVideoMode implementation…that surface’s
"pixels" field will be the framebuffer the app writes pixel data to (or
where SDL puts data if it had to convert between the app and your
driver)…in your UpdateRects implementation, you need to get data from
that “pixels” buffer to the screen in whatever way the hardware allows.
The rectangles are the portions of the framebuffer to put to the screen:

static void MYDRIVER_UpdateRects(_THIS, int numrects, SDL_Rect *rects)
{
    int i;
    for (i = 0; i < numrects; i++) {
        const SDL_Rect *r = &rects[i];
        /* put r->w by r->h pixels, where (r->x, r->y) is the
           top left corner, to the hardware. */
    }
}

–ryan.

Ryan C. Gordon <icculus icculus.org> writes:

you might find it’s easier to write a Linux fbcon driver and
not make changes to SDL at all.

That’s an interesting thought. I’ll take a look at that!

that surface’s “pixels” field will be the framebuffer the app writes
pixel data to

Okay, so does the pixels field need to be always accessible? I thought
it only had to be valid between a LockHWSurface() and UnlockHWSurface().
The way I’ve got my driver implemented right now, pixels of the main
surface is normally NULL and in LockHWSurface(), I map the HW video memory
to application memory and set pixels to that. Then the app can write directly
to video memory and call UnlockHWSurface(). (I had intended my driver to be
used for a double-buffered display, so the app would actually be writing to
the display surface that’s not currently visible.) Is that not the correct
implementation? Using that model, I didn’t understand what UpdateRects() did.
I could switch it so that I allocate another frame buffer that would allow me to
use the HW blit to copy data to the visible display buffer.

Thank you again for your help!

Brian

Ryan C. Gordon <icculus icculus.org> writes:

you might find it’s easier to write a Linux fbcon driver and
not make changes to SDL at all.

That’s an interesting thought. I’ll take a look at that!

Doing it as an fbcon driver also has the advantage of letting Linux
use it for text emulation (console) over graphics mode, which may
come in handy if you want nicer fonts, room for more text, or just
need to bypass some evil BIOS super-NMI text emulation.

that surface’s “pixels” field will be the framebuffer the app
writes pixel data to

Okay, so does the pixels field need to be always accessible?

No.

I thought it only had to be valid between a LockHWSurface() and
UnlockHWSurface().

That’s why the locking functions are there. Just make sure the
SDL_MUSTLOCK macro returns “true” for surfaces that need locking, so
that SDL code that uses it will do the right thing.

The way I’ve got my driver implemented right now, pixels of the main
surface is normally NULL and in LockHWSurface(), I map the HW video
memory to application memory and set pixels to that. Then the app
can write directly to video memory and call UnlockHWSurface(). (I
had intended my driver to be used for a double buffered display, so
the app would actually be writing to the display surface that’s not
currently visible.) Is that not the correct implementation?

Yes, that seems correct to me.

Using that model, I didn’t understand what UpdateRects() did.

It should do nothing in that case. It’s only used when there is no way
to map VRAM, forcing the backend to set up a shadow software surface.

I could switch it so that I allocate another frame buffer that would
allow me to use the HW blit to copy data to the visible display
buffer.

That might be a good idea anyway. DMA blits are usually a lot faster
than software blits, as most video subsystems that provide
acceleration are optimized for using that, rather than software
rendering.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Wednesday 02 May 2007 03:54, Brian Gunn wrote:

Okay, I managed to get my port mostly working, but I did have to add a little
kludge.

My hardware’s frame buffer is in 32-bit ARGB format. But for some reason, doing
a blit always sets the alpha to 0. I stepped through the code and finally
realized it was because the alpha mask of my surface was set to 0 even though I
set my pixel format’s Amask in VideoInit() to be 0xFF000000. I traced the
problem back to SDL_VideoInit(). For some reason, the call to
SDL_CreateRGBSurface() on line 253 of SDL_video.c (SDL version 1.2.11) passes in
a 0 for the alpha mask. Once I changed this to instead pass in vformat.Amask as
the last parameter, my problem went away.

Is there something I should change in my driver to get this to work without
modifying SDL_video.c in this way?

Thanks!

Brian