Can SDL and Linux really be used for high speed games?

I just got SDL 1.1 off the CVS and I had no trouble running the install
and getting things to compile (a lot less trouble than I had with DirectX,
anyway). I've also gotten the Raptor demo to work just fine on my system. For
the record, I'm running Red Hat 6.1 with kernel 2.2.5-15 and XFree86 4.0 with
a TNT 1 and 64 megs of system RAM on a P200 MMX. I run KDE and kwm as well.
In the sprite test demo I get between 15-20 fps with noticeable slowdowns
and speedups. Raptor seems sluggish to me, and I think I detect some severe
frame rate drops, but the music chugs along just fine (no clicking or
popping).

To be fair, I've gotten used to playing high-speed overhead shooters on
my old Sega Saturn and Sony PlayStation, stuff like Darius Gaiden and
RayStorm. I'm left wondering if these sorts of games are possible on the
Linux platform. I do have a heck of a lot running in the background, but
then again I leave a lot running when I play games in Windows with no
noticeable loss of performance. Is this an issue with SDL and how good a job
it does claiming resources from the system? I keep thinking back to that
spritetest demo. It doesn't seem like that far a stretch to ask for 100
simple objects to run about the screen with no AI or user input to worry
about, at least not on a 200 MHz system. Maybe I'm just asking for too much,
or maybe the games I've been playing and using as a point of reference use
tricks I don't know about (probably). I've heard about a guy using Linux
boot disks and SDL to make a complete game operating system, essentially
turning any PC into a ready-to-go game box. If that works out, it might be
something big. I'd love to hear people's thoughts on SDL as a high-speed
game platform, and if you managed to make it this far, thanks for listening.

--
Jeremy Gregorio
jgreg at azstarnet.com

On Wednesday, 18 October 2000, at 21:28, I received your message:

In the sprite test demo I get between 15-20 fps with noticeable slowdowns
and speedups. Raptor seems sluggish to me, and I think I detect some severe
frame rate drops, but the music chugs along just fine (no clicking or
popping).

The SDL example “testsprite.c” isn't really a good example of SDL
performance (sorry Sam, I don't mean to run it down; it is, as it says, just
an example of how one could handle sprite output). In my opinion this is
mainly for the following reason:

The code doesn't really check what needs updating, because there is no
cross-check for overlapping rectangles. So if sprite[0] is drawn at (10,10)
and sprite[10] is also drawn at (10,10), you would only need one rect,
(10,10,sprite->w,sprite->h), to cover both. Checking for this would be
cheaper than submitting two update rects, if I'm calculating right. The same
goes if you have sprite[2] at (100,100) and sprite[5] at (105,105): there it
would be cheaper to update a single rect of
(100,100,sprite->w+5,sprite->h+5). As long as you do not run the sprite demo
with hardware acceleration (SDL_Flip(screen) is used there), this won't
perform too well. Beyond that, it depends on whether your driver supports
hardware acceleration for colorkeying and blitting; lacking that makes the
missing cross-check of the update array hurt even more.
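
To illustrate the cross-check I mean, here is a rough sketch of my own (not
from testsprite.c) that folds overlapping update rectangles into their
bounding union before calling SDL_UpdateRects():

    #include "SDL.h"

    static int overlap(const SDL_Rect *a, const SDL_Rect *b)
    {
        return a->x < b->x + b->w && b->x < a->x + a->w &&
               a->y < b->y + b->h && b->y < a->y + a->h;
    }

    /* Grow *a so it covers both a and b (their bounding union). */
    static void merge(SDL_Rect *a, const SDL_Rect *b)
    {
        Sint16 x1 = a->x < b->x ? a->x : b->x;
        Sint16 y1 = a->y < b->y ? a->y : b->y;
        Sint16 x2 = a->x + a->w > b->x + b->w ? a->x + a->w : b->x + b->w;
        Sint16 y2 = a->y + a->h > b->y + b->h ? a->y + a->h : b->y + b->h;
        a->x = x1; a->y = y1;
        a->w = (Uint16)(x2 - x1); a->h = (Uint16)(y2 - y1);
    }

    /* Fold overlapping rects together until stable; returns the new
       count. O(n^2) per pass, which is fine for typical rect counts. */
    static int reduce_rects(SDL_Rect *r, int n)
    {
        int i, j, merged;
        do {
            merged = 0;
            for (i = 0; i < n; i++)
                for (j = i + 1; j < n; j++)
                    if (overlap(&r[i], &r[j])) {
                        merge(&r[i], &r[j]);
                        r[j--] = r[--n]; /* drop r[j], recheck moved one */
                        merged = 1;
                    }
        } while (merged);
        return n;
    }

    /* usage: SDL_UpdateRects(screen, reduce_rects(rects, nrects), rects); */

Whether the check pays off depends on how much overlap there actually is, of
course.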

The testsprite.c example is, as its name says, just an example. Do not mix
up an example (it should just show one possible solution) with a performance
test or a benchmark. :slight_smile: As discussed earlier on this list, some
people have benchmarked original Windows games against their SDL-based Linux
ports and found nearly no difference between the two.

Greetings,

Sascha

OK, well, considering SDL is used in just about all of (if not completely
all of) Loki Games' ports of Windows games to Linux, I'd say it's pretty
much completely adequate for high-speed games.

I've got a TNT 1 as well, with 128 MB of RAM, on a 450 MHz PII. Everything
runs perfectly satisfactorily for me.

There are some factors here to look at:

  1. Since you did grab a CVS version, there could have been a recently
    introduced bug, although I seriously doubt it; but it's possible.

  2. KDE and kwm eat up crud loads of memory. 64 megs should be enough, but
    you might want to run a system info program to see how much free memory
    you have.

  3. Drivers. If you're not running XFree86 4.x with the latest nVidia
    drivers, don't expect much out of that TNT card in Linux. The standard
    nVidia drivers that come with XFree86 4.0 don't do much.

  4. CPU usage. Running programs in the background shouldn't hurt
    performance, as you said, unless they're active often. Most programs are
    going to just sit there and do nothing. Some will be constantly, or near
    constantly, eating up resources. Again, use a sys info program (like
    gtop) to check for this. Or whatever the KDE version of top is, since
    you said that's what you're using.

Also, you are right about those “tricks” games use. A program that naively
blits lots of sprites to the screen every frame is going to run pretty damn
slow. You'd be surprised how much data that all adds up to, every byte of
which has to be pushed over the bus every second. Things like whether the
sprites are stored in graphics memory or system memory, whether sprites that
don't move are being redrawn, and whether the whole screen is being cleared
each frame can make huge differences, even on high-end systems.
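
In SDL terms, the classic version of that trick looks something like this
(a sketch only; screen, background, sprite, old_pos and new_pos are all
assumed names): restore the background under the sprite's old position, draw
it at the new one, and update just those two small rectangles instead of the
whole screen.

    /* Erase: copy the saved background over the sprite's old spot.
       Blit through copies, since SDL_BlitSurface may clip dstrect. */
    SDL_Rect erase = old_pos;
    SDL_Rect draw = new_pos;
    SDL_Rect dirty[2];

    SDL_BlitSurface(background, &old_pos, screen, &erase);
    SDL_BlitSurface(sprite, NULL, screen, &draw);

    /* Push only the two dirty areas to the display. */
    dirty[0] = old_pos;
    dirty[1] = new_pos;
    SDL_UpdateRects(screen, 2, dirty);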


In the sprite test demo I get between 15-20 fps with noticeable slowdowns
and speedups. Raptor seems sluggish to me, and I think I detect some severe
frame rate drops, but the music chugs along just fine (no clicking or
popping).

X11 is unfortunately a fairly slow target for SDL at the moment, since
there is no acceleration at all and the transport to vidmem is almost
invariably slow as molasses. Some of this can be remedied by a better
X server implementation, but there’s not much to do about the lack of
acceleration and vertical refresh synchronisation without transcending
the basic Xlib API.

To be fair, I've gotten used to playing high-speed overhead shooters on
my old Sega Saturn and Sony PlayStation, stuff like Darius Gaiden and
RayStorm. I'm left wondering if these sorts of games are possible on the
Linux platform.

(I agree, nothing beats a good shmup!)

OK, here's the problem: here's a PC. It has a fast to very fast processor,
a moderately to insanely fast graphics accelerator, gobs of RAM that's slow
to very slow, a fairly slow but huge hard disk, etc.

We want to make a game for it. We can easily attain console-like
performance if we program it the same way consoles are programmed:
close to the hardware, asm where required, and an assumption that
all boxes look alike, so we know exactly what hardware to program for.

But we can’t do that. First of all, PCs differ wildly in hardware, and
we want our program to be a good citizen so it has to use the drivers
and abstractions the OS provides. We also want it portable to other
OSes and machines, with different endianness and word length.
And we want to be able to take advantage of future hardware, so making
things portable can often be a bigger performance win than optimizing for a
particular machine.

This is where SDL comes in. The task is hopeless but I dare say it does
rather well after all, given the constraints. Help us improve it.

I do have a heck of a lot running in the background. But then again I
leave a lot running when I play games in Windows with no noticeable loss of
performance.

I know little about Windows, but I can imagine that its scheduler is tuned
more for a single application and user at a time than for the multi-user
time-sharing foundations of Unix. Of course, you can do the same in Unix:
renice your game to a high priority or even give it real-time priority,
effectively allowing it to monopolize the CPU (assuming you have root
privileges). But what if I have a background job that I would like to have
finished this side of Christmas, or if others are still logged in to my box,
or if someone sends me a mail?

Rather than making false guesses, the kernel decides to play it safe, and
if you want faster response, quit Netscape. (It was Netscape, right?)
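
For completeness, the real-time route looks roughly like this (a sketch; it
needs root, and a buggy busy loop at SCHED_FIFO priority can freeze the
whole machine):

    #include <sched.h>
    #include <stdio.h>

    /* Move the calling process into the SCHED_FIFO real-time class,
       at the lowest real-time priority to be (slightly) polite. */
    int go_realtime(void)
    {
        struct sched_param p;
        p.sched_priority = sched_get_priority_min(SCHED_FIFO);
        if (sched_setscheduler(0, SCHED_FIFO, &p) == -1) {
            perror("sched_setscheduler (are you root?)");
            return -1;
        }
        return 0;
    }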

To sum it up: if you are not satisfied with the SDL performance in your
next shoot-em-up game, identify and report the problem and everyone will be
happy :slight_smile:

Just have to comment on a shmup thread :slight_smile:

Mattias Engdegard wrote:

In the sprite test demo I get between 15-20 fps with noticeable slowdowns
and speedups. Raptor seems sluggish to me, and I think I detect some severe
frame rate drops, but the music chugs along just fine (no clicking or
popping).

It's already a given that your configuration is a bit icky; SDL on Windows
or on a properly set up Linux box is pretty fast (I used to use it for
development on a 486 laptop).

X11 is unfortunately a fairly slow target for SDL at the moment, since
there is no acceleration at all and the transport to vidmem is almost
invariably slow as molasses. Some of this can be remedied by a better
X server implementation, but there’s not much to do about the lack of
acceleration and vertical refresh synchronisation without transcending
the basic Xlib API.

To be fair, I've gotten used to playing high-speed overhead shooters on
my old Sega Saturn and Sony PlayStation, stuff like Darius Gaiden and
RayStorm. I'm left wondering if these sorts of games are possible on the
Linux platform.

(I agree, nothing beats a good shmup!)

I agree too :), the big problem is that those consoles were designed to
get killer performance out of sprite stuff; PCs weren't. They can be
coerced, though (at least I hope so, or my R-Type Delta inspired shooter
may never get off the ground :/)

OK, here's the problem: here's a PC. It has a fast to very fast processor,
a moderately to insanely fast graphics accelerator, gobs of RAM that's slow
to very slow, a fairly slow but huge hard disk, etc.

We want to make a game for it. We can easily attain console-like
performance if we program it the same way consoles are programmed:
close to the hardware, asm where required, and an assumption that
all boxes look alike, so we know exactly what hardware to program for.

But we can’t do that. First of all, PCs differ wildly in hardware, and
we want our program to be a good citizen so it has to use the drivers
and abstractions the OS provides. We also want it portable to other
OSes and machines, with different endianness and word length.
And we want to be able to take advantage of future hardware, so making
things portable can often be a bigger performance win than optimizing for a
particular machine.

This is where SDL comes in. The task is hopeless but I dare say it does
rather well after all, given the constraints. Help us improve it.

I do have a heck of a lot running in the background. But then again I
leave a lot running when I play games in Windows with no noticeable loss of
performance.

I'll assume that copy of Windows had DirectX installed; that's much faster
hardware interfacing than the default XFree86 4.0 TNT drivers.

I know little about Windows, but I can imagine that its scheduler is tuned
more for a single application and user at a time than for the multi-user
time-sharing foundations of Unix. Of course, you can do the same in Unix:
renice your game to a high priority or even give it real-time priority,
effectively allowing it to monopolize the CPU (assuming you have root
privileges). But what if I have a background job that I would like to have
finished this side of Christmas, or if others are still logged in to my box,
or if someone sends me a mail?

Rather than making false guesses, the kernel decides to play it safe, and
if you want faster response, quit Netscape. (It was Netscape, right?)

To sum it up: if you are not satisfied with the SDL performance in your
next shoot-em-up game, identify and report the problem and everyone will be
happy :slight_smile:

Yeah, help make all of us small shmup developers' lives easier, like
Mattias did when he rewrote all of SDL's video functions, and rewrote them
again, and again, and…

:slight_smile:

Wesley Poole
AKA Phoenix Kokido
Tired of hiding behind an online-only identity…
members.xoom.com/kokido

Your question seems to be more about Linux than SDL.

Linux does all the process scheduling and other fun stuff that OSes do to
make things run a certain way.

Drivers also have a lot to do with why games run as well as they do.

The Detonator nVidia drivers are wonderful, whereas if you have no 3D
accelerator support you are basically SOL performance-wise.

XFree86 <the graphics engine?> is also an important player.

It's not as cut and clean as it will ever be on a console system with
fixed hardware. It's just not possible for me to write a game for my video
card only and expect everyone else to go get it.

Catch my drift?

Dave

On Wed, 18 Oct 2000, Jeremy Gregorio wrote:

For the record, I'm running Red Hat 6.1 with kernel 2.2.5-15 and XFree86
4.0 with a TNT 1 and 64 megs of system RAM on a P200 MMX. I run KDE and kwm
as well. In the sprite test demo I get between 15-20 fps with noticeable
slowdowns and speedups.

Uhm… at 1600x1200 on a Duron 600 w/382MB RAM and an ATI something or
other, XFree86 3.3, Red Hat 6.2, I get > 120 fps @ 100% CPU utilization. It
looks CPU-bound to me.

And yes, platforms have a lot of tricks for speeding up graphics,
especially since most of them have very low-MHz ‘main’ processors.


Blue Lang, Unix Voodoo Priest
202 Ashe Ave, Apt 3, Raleigh, NC. 919 835 1540
“… where a child can walk in and have their heart turn dark as a result
of being on the Internet…” - George W Bush, for Joseph Conrad

Wed, 01 Nov 2000 Dave Leimbach wrote:

Your question seems to be more about Linux than SDL.

Yep. I've ported some test code from DirectX to svgalib and GGI, and both
of the Linux variants are painfully slow no matter what I try… The sprite
engine I ported can easily do several hundred FPS when running in RAM, but
there's just no way to get the data displayed at more than some 30-50 fps,
and that's on a Celeron 333 with a Permedia 2 based AGP card. It seems like
something slows the AGP bus down, or something… Poor chipset support?
(Same problem with other machines; it's not tied to that mainboard.)

(This AGP/PCI bus problem exists on Windows too, of course, but it's not a
big problem there: 90+ FPS is no problem on the same machine, and 60 FPS is
possible on a 200 MHz K6 machine with an S3 ViRGE PCI card.)

Linux does all the process scheduling and other fun stuff that OS’s do to
make things run a certain way.

SCHED_FIFO easily drives a periodic task at 1000+ Hz, and with the lowlatency
patch, it’s even rock solid hard RT of nearly “control engineering” quality.
That’s certainly not a problem here.

Drivers also have a lot to do with why games run as well as they do.

Indeed, but we're talking about direct access to the VRAM. (Write-only, of
course; reading is strictly forbidden, as it's dog slow on almost any card!)

The Detonator NVidia drivers are wonderful whereas if you have no
3daccelerator support you are basically SOL in performance.

This is one reason why I’m going to use OpenGL for my next 2D engine; even on
Windows, that seems to be the only way to get real performance. (My new P-III
933 doesn’t help software rendering much… sob)

XFree86 <the graphics engine?> is also an important player.

How about 2.4.0 (+ agpgart) with XFree86 4.0.1 and a Millennium G400 MAX?
I have yet to set it up on my new machine… 4.0.1 works, and it's a great
deal faster than 3.3.x, but I haven't even had time to try any games or
test programs yet.

Is there a chance this configuration can enable faster blitting into VRAM?
(For software rendering, that is.) I’ve seen nothing near the transfer rate of
the AGP bus so far, on any Linux system…

It's not as cut and clean as it will ever be on a console system with
fixed hardware. It's just not possible for me to write a game for my video
card only and expect everyone else to go get it.

Indeed. This is a very sad part of the PC + 3D accelerator evolution: even
though average game PCs outperform most consoles, they still perform worse
in many cases.

I simply don't think sub-video-frame-rate, non-retrace-synced displays cut
it for scrolling 2D action games. This goes for Windows as well; most games
don't bother to deal with the refresh rate vs. animation/scrolling speed
issues. However, at least it's possible to achieve good results there. The
problem is that I hate Windoze programming, so… what to do?

Is it realistic to fix Linux, or are we looking at some very serious design
problems? (I know about the basic issues with X and kernel drivers, but this
bandwidth problem seems to be on a lower level, as it hits svgalib just as
hard.)

David Olofson
Programmer
Reologica Instruments AB
david.olofson at reologica.se


Wed, 01 Nov 2000 Blue Lang wrote:

On Wed, 18 Oct 2000, Jeremy Gregorio wrote:

For the record, I'm running Red Hat 6.1 with kernel 2.2.5-15 and XFree86
4.0 with a TNT 1 and 64 megs of system RAM on a P200 MMX. I run KDE and kwm
as well. In the sprite test demo I get between 15-20 fps with noticeable
slowdowns and speedups.

Uhm… at 1600x1200 on a Duron 600 w/382MB RAM and an ATI something or
other, XFree86 3.3, Red Hat 6.2, I get > 120 fps @ 100% CPU utilization. It
looks CPU-bound to me.

With the -fast switch, I get 57 FPS; without any switches (windowed), I
get 45 FPS. This is on my 400 MHz P-II at work: 128 MB RAM, 1280x1024 (32
bpp), XFree86 3.3.6, Red Hat 7.0 (upgraded) with a 2.2.10-lowlatency
kernel. The video card is a Mach64 based ATI card on AGP.

David Olofson
Programmer
Reologica Instruments AB
david.olofson at reologica.se


Is there a chance this configuration can enable faster blitting into VRAM?
(For software rendering, that is.) I’ve seen nothing near the transfer rate of
the AGP bus so far, on any Linux system…

We've had this discussion here many times before, and the basic fact is
that if nobody is willing to do the work to provide fast memory->vidmem
transfers through DMA or bus-mastering, nothing will happen.

I simply don't think sub-video-frame-rate, non-retrace-synced displays cut
it for scrolling 2D action games. This goes for Windows as well; most games
don't bother to deal with the refresh rate vs. animation/scrolling speed
issues. However, at least it's possible to achieve good results there. The
problem is that I hate Windoze programming, so… what to do?

Page-flipping usually synchronizes to vertical refresh, and SDL supports
that. You can also store stuff in video memory (hardware surfaces, in SDL
parlance) to reduce bandwidth requirements. Fbcon, svgalib and DGA2 should
be able to use these two concepts. X11 itself doesn't, unless someone
writes it.
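
In code, the page-flipped case looks about like this (a sketch; whether you
really get a hardware double buffer depends on the target, and draw_frame()
is a stand-in for your own rendering):

    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 16,
            SDL_FULLSCREEN | SDL_HWSURFACE | SDL_DOUBLEBUF);

    for (;;) {
        draw_frame(screen); /* render into the back buffer */
        SDL_Flip(screen);   /* flips pages, synced to the vertical
                               retrace where real flipping exists;
                               otherwise falls back to a full update */
    }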

Is it realistic to fix Linux, or are we looking at some very serious design
problems? (I know about the basic issues with X and kernel drivers, but this
bandwidth problem seems to be on a lower level, as it hits svgalib just as
hard.)

I have never seen svgalib used with anything but memcpy-style transfers to
video memory, which should be considerably slower than direct transfer even
if the MTRRs are set to write-combining for the vidmem area and a judicious
amount of prefetch hints is used. I'd like to see a comparison, though.
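
(For anyone who wants to try the write-combining part: the /proc/mtrr
interface is described in Documentation/mtrr.txt in the kernel tree.
Something like the sketch below, run as root; the base and size here are
placeholders, so take the real values for your card from cat /proc/mtrr or
the X server log.)

    #include <stdio.h>

    /* Mark a hypothetical 16 MB framebuffer at 0xe0000000 as
       write-combining via the /proc/mtrr interface. */
    int main(void)
    {
        FILE *f = fopen("/proc/mtrr", "w");
        if (f == NULL) {
            perror("/proc/mtrr");
            return 1;
        }
        fprintf(f, "base=0xe0000000 size=0x1000000 type=write-combining\n");
        return fclose(f) != 0;
    }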

We've had this discussion here many times before, and the basic fact is
that if nobody is willing to do the work to provide fast memory->vidmem
transfers through DMA or bus-mastering, nothing will happen.

My concern for using SDL in high speed games is not so much with
fast memory copies as with support for hardware acceleration. Most
cards now support some form of hardware accelerated blit from video
RAM to screen. This is generally much faster than manually copying
the data… sometimes an order of magnitude faster. I can see the
difference when I test my GridSlammer code on Windows with DirectX
versus Linux with SDL on the same hardware. I might get 70 fps on
Windows but only 29 fps on Linux.

My question is: will SDL take advantage of hardware acceleration
when it is available in the underlying video driver? Or will
it always use its own blit code? If I have fbcon (or whatever)
code that supports 2D hardware acceleration… what do I need
to do so SDL takes advantage of it?

Thanks,

Thad Phetteplace

On Fri, Nov 03, 2000 at 10:48:09AM -0600, Thad Phetteplace wrote:

My question is: will SDL take advantage of hardware acceleration
when it is available in the underlying video driver? Or will
it always use its own blit code?

Of course. It just doesn't exist right now in X11 (Pierre Phaneuf is
usually a trove of knowledge on these subjects). I should say it doesn't
exist explicitly; won't some X11 drivers cache pixmaps on the board and
then blit from VRAM to the backbuffer that way?

it always use its own blit code? If I have fbcon (or whatever)
code that supports 2D hardware acceleration… what do I need
to do so SDL takes advantage of it?

There are fbcon drivers for Matrox and 3Dfx cards. Build SDL with
fbcon support (default), and run an app at the terminal. It will try
to use X, fail, then try fbcon, and hopefully succeed. Poke around in
the fbcon driver source to see what’s being accelerated, and get some
docs and write up new drivers or improve the existing ones.
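
A quick way to verify which backend SDL actually picked is
SDL_VideoDriverName(), assuming your SDL snapshot is recent enough to have
it:

    #include <stdio.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        char name[32];

        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;
        /* Reports the driver SDL settled on, e.g. "x11" or "fbcon". */
        if (SDL_VideoDriverName(name, sizeof(name)) != NULL)
            printf("video driver: %s\n", name);
        SDL_Quit();
        return 0;
    }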

If it’s not obvious by now, I’m a 3D guy :slight_smile:

m.


Programmer, Loki Software
http://lokigames.com/~briareos/

“Ha ha.” “Ha ha.” “What are you laughing at?”
“Just the horror of being alive.” - Tony Millionaire

On Fri, Nov 03, 2000 at 10:48:09AM -0600, Thad Phetteplace wrote:

My question is: will SDL take advantage of hardware acceleration
when it is available in the underlying video driver? Or will
it always use its own blit code?

Of course. It just doesn't exist right now in X11 (Pierre Phaneuf is
usually a trove of knowledge on these subjects). I should say it doesn't
exist explicitly; won't some X11 drivers cache pixmaps on the board and
then blit from VRAM to the backbuffer that way?

Yes, but not shared-memory pixmaps. I've toyed with non-local video memory
sprites, and the speedup is considerable, but any pixel access at all
completely slows things down, as the affected area needs to be retrieved
over the X11 wire protocol. The real solution here is to use DGA, and I
think I have it working without root privileges if you have the framebuffer
console enabled, but I need more test platforms…

There are fbcon drivers for Matrox and 3Dfx cards. Build SDL with
fbcon support (default), and run an app at the terminal. It will try
to use X, fail, then try fbcon, and hopefully succeed. Poke around in
the fbcon driver source to see what’s being accelerated, and get some
docs and write up new drivers or improve the existing ones.

That's correct. If you ask for SDL_HWSURFACE and get it (possible with
the DirectX, DGA, and framebuffer console drivers), the screen's pixels
pointer points directly into display VRAM. If that is true, you can also
create other surfaces with the SDL_HWSURFACE flag, and they will be placed
in hardware memory if possible as well, and blits between them and solid
fills will be hardware accelerated.
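
A sketch of that, with the flags checked so you know what you actually got
(none of it is guaranteed; SDL silently falls back to software surfaces):

    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 16,
            SDL_FULLSCREEN | SDL_HWSURFACE);
    SDL_Surface *buf;

    if (screen->flags & SDL_HWSURFACE) {
        /* screen->pixels points into VRAM; try to put a work surface
           there too, so buf->screen blits can be done by the card's
           blitter instead of the CPU. */
        buf = SDL_CreateRGBSurface(SDL_HWSURFACE, 256, 256,
                screen->format->BitsPerPixel, 0, 0, 0, 0);
        if (buf != NULL && (buf->flags & SDL_HWSURFACE))
            SDL_BlitSurface(buf, NULL, screen, NULL);
    }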

See ya!
-Sam Lantinga, Lead Programmer, Loki Entertainment Software

Yes, but not shared-memory pixmaps. I've toyed with non-local video memory
sprites, and the speedup is considerable, but any pixel access at all
completely slows things down, as the affected area needs to be retrieved
over the X11 wire protocol.

Pierre suggested the idea a while ago, but it wasn't pursued since shaped
blits were so slow then. If they are faster now in XFree86 4.x, it would be
worth taking up again. Slow pixel access is only a problem if you need it,
and many games don't (or at least not often). Stuff like alpha blits will
still be slow of course, but the same condition applies (don't use it and
it will be fast; use it and it will work).

It wouldn’t replace the main X11 driver but could be an interesting
alternative that’s faster for some applications.

The real solution here is to use DGA, and I think I have it working
without root privileges if you have the framebuffer console enabled, but I
need more test platforms…

How does that work at the low level? By using a kernel driver that exposes
an mmapped window onto the video memory?

Sam said:

That’s correct. If you ask for SDL_HWSURFACE and get it (possible with
the DirectX, DGA, and framebuffer console drivers), the screen pixels
pointer points directly to display VRAM. If that is true, you can also
create other surfaces with the SDL_HWSURFACE flag,

How do you do this when loading images from disk, for example? >:^)

-bill!

“William Kendrick” wrote in message
news:200011032006.eA3K6W106382 at sonic.net

Sam said:

That’s correct. If you ask for SDL_HWSURFACE and get it (possible with
the DirectX, DGA, and framebuffer console drivers), the screen pixels
pointer points directly to display VRAM. If that is true, you can also
create other surfaces with the SDL_HWSURFACE flag,

How do you do this when loading images from disk, for example? >:^)

SDL_DisplayFormat.

--
Rainer Deyke (root at rainerdeyke.com)
Shareware computer games - http://rainerdeyke.com
“To stand within their ranks means to fight among enemies” - Abigor

How do you do this when loading images from disk, for example? >:^)

SDL_DisplayFormat.

Any special args I need? Or does it just do it, if the display is
set for HWSURFACE?

-bill!

Fri, 03 Nov 2000 Mattias Engdegård wrote:

Is there a chance this configuration can enable faster blitting into VRAM?
(For software rendering, that is.) I’ve seen nothing near the transfer rate of
the AGP bus so far, on any Linux system…

We've had this discussion here many times before, and the basic fact is
that if nobody is willing to do the work to provide fast memory->vidmem
transfers through DMA or bus-mastering, nothing will happen.

OK. That is, system RAM surfaces aren't implemented in the 3D accelerated
drivers either? (Some cards can do that, and it's usually supported under
Windoze, AFAIK.)

I simply don’t think sub video frame rate, non-retrace-synced displays cut it
for scrolling 2D action games. This goes for Windows as well; most games don’t
bother to deal with the refresh rate vs. animation/scrolling speed issues.
However, at least it’s possible to achieve good results there. The problem
is that I hate Windoze programming, so… what to do?

Page-flipping usually synchronizes to vertical refresh, and SDL
supports that.

Does it synchronize only the flip operation, or does it also block until
the flip has been performed? A subtle (?) but fundamental difference. (I've
browsed earlier threads on this, and got the impression that SDL doesn't
support the latter. What did I miss?)

You can also store stuff in video memory (hardware surfaces, in SDL
parlance) to reduce bandwidth requirements. Fbcon, svgalib and DGA2 should
be able to use these two concepts. X11 itself doesn't, unless someone
writes it.

Hmm… browsing the 1.1.6 source, looking only for raster sync code, I
found that the fbdev code uses an ioctl() that supposedly does the same
thing. (BTW, is that #define enabled by default?)

The svgalib code has a NOP for FlipHWSurface(), doing the sync in
LockHWSurface() instead, though only on double-buffered displays. This
should work for me, but the semantics are different from fbcon's, unless
I'm missing something at a higher level in the code.

As for DX5/Win32, DDFLIP_WAIT seems to be used at all times when flipping.
(The lock function should behave like svgalib's, I'm guessing after a quick
glance.)

The X11 code seems to have a LockHWSurface() function similar to the
svgalib double-buffered one; there is an XSync() call that (probably) will
be made as a result of the kind of engine loop I'm thinking of. However, as
X11 doesn't support double buffering (that's the problem, if I understand
it correctly), this is quite irrelevant, unless the blits are faster than
the CRT beam. (Which they should be on decent machines, but…)

Finally, DGA seems to sync as expected, and it seems like some other video
subsystems that I haven't programmed for, or even seen, also do the Right
Thing (albeit not always in a nice way, but that's everything from hard to
impossible to fix…), so my conclusion is basically “What's all the fuss
about!?”

Just don't expect to get rock solid, smooth video in any windowed
environment. (Although it's possible to achieve if the blitting is fast
enough.)

As for the “where to place data” and blitting vs. software rendering
issues, that's obviously the complicated part, and unfortunately I can't
say I'm much more motivated to fix it than most other people, unless it
turns out that 3D accelerators doing 2D stuff can't yield a decent,
good-looking and ultra-smooth 2D game engine.

That is, I'm not interested enough in things that require software
rendering, such as video decoders and some special effects. (This may
change, but unfortunately it doesn't change the fact that I'm already
involved in too many projects of all kinds, not really getting anything
done… :frowning: )

Is it realistic to fix Linux, or are we looking at some very serious design
problems? (I know about the basic issues with X and kernel drivers, but this
bandwidth problem seems to be on a lower level, as it hits svgalib just as
hard.)

I have never seen svgalib used with anything but memcpy-style transfers to
video memory, which should be considerably slower than direct transfer even
if the MTRRs are set to write-combining for the vidmem area and a judicious
amount of prefetch hints is used. I'd like to see a comparison, though.

Do you mean accelerated blits vs. CPU blits in VRAM, or sysram->VRAM blits
using the CPU vs. DMA? (I’m kind of interested in both, although I’m most
likely going to use 3D acceleration to do VRAM->VRAM blits.) The former
shouldn’t be too hard to try, but the latter would require DMA, obviously…
Is this entirely unsupported by all drivers on Linux?

David Olofson
Programmer
Reologica Instruments AB
david.olofson at reologica.se


SDL_CreateRGBSurface decides whether to create a hardware surface based on
three things:
a) It will only try to create a hardware surface if the screen surface is a
hardware surface itself.
b) If the surface is to be used for colorkey blits (SDL_SRCCOLORKEY is passed
as a flag), a hardware surface will only be created if hardware->hardware copy
blits are supported.
c) If the surface is to be used for alpha blits (SDL_SRCALPHA is passed as a
flag), a hardware surface will only be created if hardware->hardware alpha
blits are supported.

Since SDL_LoadBMP, SDL_DisplayFormat(Alpha) and SDL_ConvertSurface all use
SDL_CreateRGBSurface internally, these rules still apply. This is also why
it's important to set your colorkey and alpha properties before calling
SDL_DisplayFormat(Alpha) or SDL_ConvertSurface.
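
So the usual loading sequence ends up looking like this (a sketch; the file
name and key color are made up):

    SDL_Surface *tmp, *sprite;

    tmp = SDL_LoadBMP("sprite.bmp");
    /* Set the colorkey *before* converting, so the copy that
       SDL_DisplayFormat() creates can be considered for a hardware
       surface under rule (b) above. */
    SDL_SetColorKey(tmp, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(tmp->format, 255, 0, 255));
    sprite = SDL_DisplayFormat(tmp);
    SDL_FreeSurface(tmp);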



Martin

Bother! said Pooh, as the Kazons discovered hair mousse.

Page-flipping usually synchronizes to vertical refresh, and SDL
supports that.

Does it synchronize only the flip operation, or does it also block until the
flip has been performed? Subtle (?) but fundamental difference. (I’ve browsed
earlier threads on this, and got the impression that SDL doesn’t support the
latter. What did I miss?)

I think most of them block (usually by busy-waiting), since that is the
easiest to implement. Otherwise you need some kind of notification that the
flip has actually taken place, so that modification of the newly hidden
surface is possible without visible artifacts. If (or when) SDL supports
triple-buffering, this may change.

However, as X11 doesn't support double buffering (that's the problem, if I
understand it correctly), this is quite irrelevant, unless the blits are
faster than the CRT beam. (Which they should be on decent machines, but…)

X11 supports double buffering perfectly well since it uses a software
backing surface. Don’t confuse double-buffering with hardware page-flipping.

As for the “where to place data” and blitting vs. software rendering
issues, that's obviously the complicated part, and unfortunately I can't
say I'm much more motivated to fix it than most other people, unless it
turns out that 3D accelerators doing 2D stuff can't yield a decent,
good-looking and ultra-smooth 2D game engine.

Alas, it turns out that they can't. Also, I refuse to require hardware 3D
acceleration for a 2D game that is perfectly feasible with just basic 2D
operations. Not only would it be a display of incompetence, but it would
also impose needless limits on my target audience (and I do not only mean
3-year-old machines, but also wearables, handhelds, wireless gadgets,
etc.).

I have never seen svgalib used with anything but memcpy-style transfers to
video memory, which should be considerably slower than direct transfer even
if the MTRRs are set to write-combining for the vidmem area and a judicious
amount of prefetch hints is used. I'd like to see a comparison, though.

Do you mean accelerated blits vs. CPU blits in VRAM, or sysram->VRAM blits
using the CPU vs. DMA?

the latter