SDL Graphics Tutorials Run SLOW for Modern PC

Any Nintendo 8-bit (1985) game seems to run as well as or better than the seemingly well-enough optimized graphics tutorials I've seen for SDL on my 2GHz machine.

What’s going on here?

For example, compare

SDL lesson 2 at:
http://cone3d.gamedev.net/cgi-bin/index.pl?page=tutorials/gfxsdl/tut2
using your modern PC

with

Super Mario Brothers or any other ancient game, or any
old PC game you can run on a modern PC which runs so
fast you can’t even play it.

The SDL examples running on a modern PC seem to run very slowly. Why?

Thanks!
Gary

Works fine for me (and everyone else). What kind of video card have you got?


What bitdepth are the screen and the images involved?
Using SDL_DisplayFormat() can significantly increase blitting speed.
I didn’t notice it when I skimmed that tutorial.
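For concreteness, a minimal sketch of that conversion (the helper name and file handling are made up; SDL_LoadBMP(), SDL_DisplayFormat(), and SDL_FreeSurface() are the actual SDL 1.2 calls):

#include "SDL.h"

/* Convert an image to the display's pixel format once, at load time,
 * so every later blit can skip the per-pixel format conversion. */
SDL_Surface *load_converted(const char *path)   /* hypothetical helper */
{
    SDL_Surface *raw = SDL_LoadBMP(path);       /* e.g. the tutorial's sprite */
    SDL_Surface *converted;

    if (raw == NULL)
        return NULL;
    converted = SDL_DisplayFormat(raw);         /* match the screen's format */
    SDL_FreeSurface(raw);                       /* drop the unconverted copy */
    return converted;                           /* NULL if conversion failed */
}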

-bill

What do you mean by fine? Does it take 8 seconds to move the 130x130 bmp across the 640-pixel-wide screen? Being that it scrolls 1 pixel at a time, I'm getting only 640/8, or ~80 frames per second, for this incredibly simple animation.
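A rough way to measure the frame rate, rather than timing it by eye, is to count flips against SDL_GetTicks(); a sketch (the drawing code from the tutorial is elided):

#include <stdio.h>
#include "SDL.h"

/* Count completed frames over a fixed interval to estimate fps. */
static void measure_fps(SDL_Surface *screen)
{
    Uint32 start = SDL_GetTicks();
    Uint32 elapsed;
    int frames = 0;

    while (SDL_GetTicks() - start < 5000) {  /* sample for 5 seconds */
        /* ... blit and move the sprite here, as the tutorial does ... */
        SDL_Flip(screen);                    /* may block on vsync */
        frames++;
    }
    elapsed = SDL_GetTicks() - start;
    printf("%.1f frames/sec\n", frames * 1000.0 / elapsed);
}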

I'm running it on an ATI Mach64 3D Rage II (an older card), but it performs maybe only 2 times better (~160 frames) on my Centrino laptop with Intel 832 chipset integrated graphics, which runs Quake III Arena (a modern 3D game), for example, like a champion.

Maybe ~160 frames sounds ok, but glxgears runs at, what, 2000+ frames with any decent 3D card. Also, maybe 160 frames is ok for 2D stuff, but then, what visual trick is happening in ancient games that allowed them to appear to scroll smoothly with technology that was, what, 1/(2^10), i.e. about 1/1000th, the speed it is today? The lesson2 graphic should zoom across the screen like lightning, like some old games do when you play them on modern PCs. Is it zooming across the screen for you? Why not?

Thanks,
Gary


On Dec 20, 2004, at 5:35 PM, G B wrote:

The lesson2 graphic should zoom across the screen like lightning, like some old games do when you play them on modern PCs. Is it zooming across the screen for you? Why not?

You shouldn't expect it to go at a million miles per hour; it depends on the buffer swapping code for your platform. In some cases, a buffer swap waits for a vertical retrace, so your maximum frames per second will be about the same as the refresh rate of the monitor. This is actually a good thing, because you won't see tearing. It's a horrible waste of CPU time to do it any other way.

Also note that you're probably pushing around 2^10 times more data between RAM and VRAM with a double- or triple-buffered context, at 32bpp, at higher resolutions than on old devices. OpenGL doesn't really have this problem, because all the hard stuff (hopefully) happens inside the card, which is fast.
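In SDL 1.2 terms the swap described here is SDL_Flip() on a surface created with SDL_DOUBLEBUF; whether the flip really waits for the retrace depends on the platform and driver, so treat this as a sketch of the mechanism, not a guarantee:

#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Ask for a hardware, double-buffered display surface.  SDL may
     * silently fall back to a software shadow surface. */
    screen = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL)
        return 1;

    /* ... draw a frame into the back buffer ... */

    /* Swap front and back buffers.  On drivers that honour vsync this
     * blocks until the vertical retrace, which is why the frame rate
     * tops out near the monitor's refresh rate. */
    SDL_Flip(screen);

    SDL_Quit();
    return 0;
}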

-bob

— Bob Ippolito wrote:


You shouldn't expect it to go at a million miles per hour; it depends on the buffer swapping code for your platform. In some cases, a buffer swap waits for a vertical retrace, so your maximum frames per second will be about the same as the refresh rate of the monitor. This is actually a good thing, because you won't see tearing. It's a horrible waste of CPU time to do it any other way.

Also note that you're probably pushing around 2^10 times more data between RAM and VRAM with a double- or triple-buffered context,

I can't write directly to the VRAM with SDL, right? If I can't access the VRAM directly (which is the hardware), what does it mean that SDL gives you a HWSURFACE that allows you to "access the framebuffer directly"? What is the framebuffer, then? Just some ordinary place in main memory that is the specified location that the CPU ships to the VRAM? How is it any more efficient than using QT or GTK, which supposedly don't access it directly? Does it just save you having to copy some buffer in main memory to the specified "framebuffer" area, which is also just in main memory? What's the performance gain of using SDL over QT or GTK?

Also,
Double buffering requires pushing anywhere near 2^10 times more data? Doesn't doubling the buffer only account for needing to push around 2^1 (twice) the data?

at 32bpp, at higher resolutions than old devices.

Changing the depth from 32 to 8 isn't affecting performance for me, only the look:

screen = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

(Note: I still need to add SDL_DisplayFormat() to the example code, which lacks it.)

OpenGL doesn't really have this problem, because all the hard stuff (hopefully) happens inside the card, which is fast.

What about those 2D-only games, like Syndicate, that run super fast on modern PCs (the screen scrolls so fast you can hardly play)? How do they get that performance?

Thanks,
Gary


Yeah, actually, if you are saying you get about 80 fps: 80 Hz is a common refresh rate.

You can disable vsync (and get things that go crazy fast, like you're saying old NES games do when they're so fast they're unplayable), but then you also get tearing and shearing, which is generally no good.

On Mon, 2004-12-20 at 16:35, G B wrote:

What do you mean by fine? Does it take 8 seconds to move the 130x130 bmp across the 640-pixel-wide screen?

Ever consider that it might have been written to do that?

Being that it scrolls 1 pixel at a time, I'm getting only 640/8, or ~80 frames per second, for this incredibly simple animation.

Which is a really nice number if you have vsync turned on and your
display updates at 80 fps.

I'm running it on an ATI Mach64 3D Rage II (an older card), but it performs maybe only 2 times better (~160 frames) on my Centrino laptop with Intel 832 chipset integrated graphics, which runs Quake III Arena (a modern 3D game), for example, like a champion.

Maybe ~160 frames sounds ok, but glxgears runs at, what, 2000+ frames with any decent 3D card.

Why do you believe there is a relationship between the speed of glxgears and any other program? They were written for different purposes by different people. Judging one by the other is rather like expecting oranges to like bananas because you believe gorillas like bananas.

You have jumped to a conclusion and jumped right off the cliff.

Bob Pendleton


— Bob Pendleton wrote:

What do you mean by fine? Does it take 8 seconds to move the 130x130 bmp across the 640-pixel-wide screen?

Ever consider that it might have been written to do that?

Please grab the tutorial I'm talking about and look at it. It is short. It does not explicitly try to do that, as best as I can see.

Being that it scrolls 1 pixel at a time, I'm getting only 640/8, or ~80 frames per second, for this incredibly simple animation.

Which is a really nice number if you have vsync turned on and your display updates at 80 fps.

Perhaps that is what is slowing me down. However, on my laptop I'm getting ~160 frames and there is tearing, so it's not limited by vsync there, it seems, yet it still doesn't go very fast (only 160).


Maybe ~160 frames sounds ok, but glxgears runs at, what, 2000+ frames with any decent 3D card.

Why do you believe there is a relationship between the speed of glxgears and any other program?

Perhaps I wasn't clear. I didn't mean to suggest that there was a strong relationship between the two; I thought that was evident from my having mentioned it only as a passing note.

They were written for different purposes by different people.

(And for different hardware.)

Judging one by the other is rather like expecting oranges to like bananas because you believe gorillas like bananas.

You have jumped to a conclusion and jumped right off the cliff.

I don't know what conclusion you are referring to. I was merely asking questions, not stating conclusions.

All I'm trying to understand is how to get SDL to produce really fast animation. By fast I mean like old games that run too fast to play, so you have to slow them down.

Thanks,
Gary


On Dec 20, 2004, at 6:20 PM, G B wrote:

I can't write directly to the VRAM with SDL, right? If I can't access the VRAM directly (which is the hardware), what does it mean that SDL gives you a HWSURFACE that allows you to "access the framebuffer directly"? What is the framebuffer, then? Just some ordinary place in main memory that is the specified location that the CPU ships to the VRAM? How is it any more efficient than using QT or GTK, which supposedly don't access it directly? Does it just save you having to copy some buffer in main memory to the specified "framebuffer" area, which is also just in main memory? What's the performance gain of using SDL over QT or GTK?

You can’t write directly to VRAM on most modern platforms, period.
It’s faster to swap a whole frame than it is to work with bytes in VRAM
anyway because you can probably use something like DMA to do it.

SDL doesn't necessarily have any performance gain over Qt or GTK. The S stands for Simple, not Speedy, after all. In practice, it's often more than adequate, but for really complex stuff you need to use something accelerated (OpenGL, …). SDL often does not take advantage of much hardware acceleration, though there is at least one effort to combine the two (glSDL).

Also,
Double buffering requires pushing anywhere near 2^10 times more data? Doesn't doubling the buffer only account for needing to push around 2^1 (twice) the data?

I was talking about 2^10 in comparison to the old platforms you were
referencing… which includes the resolution, bits per pixel, double
buffering, etc.

In reality, since these older platforms had hardware accelerated
sprites and SDL typically does not, it’s really more like 2^1000 times
the data that has to go through the pipe.

at 32bpp, at higher resolutions than old devices.

Changing the depth from 32 to 8 isn't affecting performance for me, only the look:

screen = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

(Note: I still need to add SDL_DisplayFormat() to the example code, which lacks it.)

If you’re not using a full screen context, SDL probably has to use
32bpp anyway, so 8 bit is just going to slow it down.

OpenGL doesn't really have this problem, because all the hard stuff (hopefully) happens inside the card, which is fast.

What about those 2D-only games, like Syndicate, that run super fast on modern PCs (the screen scrolls so fast you can hardly play)? How do they get that performance?

Who cares? It’s not performance. If you’re updating the display
faster than the vertical refresh rate, then you are wasting CPU cycles
and making it look worse due to the inevitable tearing.

-bob

On Mon, 2004-12-20 at 17:20, G B wrote:

I can’t write directly to the VRAM with SDL, right?

Depends on the hardware, the OS, and the depth of the surface you have
asked for. On many combinations of OS/Hardware SDL can give you direct
access to the hardware. Not that you really want it, but it can.

If I can't access the VRAM directly (which is the hardware), what does it mean that SDL gives you a HWSURFACE that allows you to "access the framebuffer directly"? What is the framebuffer, then?

Since your original statement was wrong, the conclusion is wrong. If you asked for a hardware surface and got it, then you have a pointer into some part of the framebuffer.

Just some ordinary place in main memory that is the specified location that the CPU ships to the VRAM? How is it any more efficient than using QT or GTK, which supposedly don't access it directly?

There is really no question here. All of the above, GTK, QT, and SDL
have to work with the same restrictions and can be equally efficient or
inefficient depending on what you want to do. Also, while GTK and QT are
GUI toolkits, SDL is not.

Does it just save you having to copy some buffer in main memory to the specified "framebuffer" area, which is also just in main memory? What's the performance gain of using SDL over QT or GTK?

Why would you believe there is one, or could be one? SDL makes some kinds of programming much easier to do. GTK and QT make other kinds of programming easy to do. So, from a human-time point of view, their efficiency depends on the application you are trying to build. But from a hardware point of view, they all work through the same layers and all face the same problems.

Also,
Double buffering requires pushing anywhere near 2^10 times more data? Doesn't doubling the buffer only account for needing to push around 2^1 (twice) the data?

Double buffering just means you have two (or more) buffers. Nothing
more, nothing less. There is really very little relationship between the
amount of data you have to push around to create a scene and the number
of buffers.

at 32bpp, at higher resolutions than old devices.

Changing the depth from 32 to 8 isn't affecting performance for me, only the look:

screen = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

(Note: I still need to add SDL_DisplayFormat() to the example code, which lacks it.)

OpenGL doesn't really have this problem, because all the hard stuff (hopefully) happens inside the card, which is fast.

What about those 2D-only games, like Syndicate, that run super fast on modern PCs (the screen scrolls so fast you can hardly play)? How do they get that performance?

Don't know that game, but there are two main ways 2D games get speed. 1) The old games were written for very slow hardware and don't take actual time into account, so they just naturally run faster on faster hardware. 2) The visual speed of an object moving on the screen depends on how far you move it from frame to frame. If I move it 1 pixel/frame at 100 fps, it seems to move at the same visual speed as if I were moving it 10 pixels/frame at 10 fps; both give a visual motion of 100 pixels/second, though the 10 fps animation may look rather jerky… In well-written games there is no relationship between visual speed and the number of frames being drawn each second.
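In code, that means scaling movement by elapsed time instead of moving a fixed amount per frame; a minimal sketch (the 100 px/sec speed and the sprite are placeholders):

#include "SDL.h"

/* Move a sprite at a fixed visual speed, in pixels per second,
 * regardless of how many frames per second the machine manages. */
static void scroll_sprite(SDL_Surface *screen, SDL_Surface *sprite)
{
    const float speed = 100.0f;              /* visual speed: 100 px/sec */
    float x = 0.0f;
    float dt;
    Uint32 now, last = SDL_GetTicks();
    SDL_Rect dst;

    while (x < screen->w - sprite->w) {
        now = SDL_GetTicks();
        dt = (now - last) / 1000.0f;         /* seconds since last frame */
        last = now;

        x += speed * dt;                     /* same px/sec at 10 or 100 fps */

        SDL_FillRect(screen, NULL, 0);       /* clear to black */
        dst.x = (Sint16)x;
        dst.y = 100;
        SDL_BlitSurface(sprite, NULL, screen, &dst);
        SDL_Flip(screen);
    }
}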


Let me reiterate, you are drawing a lot of conclusions from false
premises. I know from experience that doing that will cause great
confusion.

Bob Pendleton


— Bob Ippolito wrote:


You can't write directly to VRAM on most modern platforms, period. It's faster to swap a whole frame than it is to work with bytes in VRAM anyway because you can probably use something like DMA to do it.

SDL doesn't necessarily have any performance gain over Qt or GTK. The S stands for Simple, not Speedy, after all. In practice, it's often more than adequate, but for really complex stuff you need to use something accelerated (OpenGL, …). SDL often does not take advantage of much hardware acceleration, though there is at least one effort to combine the two (glSDL).

But in the case that you can get a HWSURFACE and/or take advantage of hardware acceleration, can you get performance gains over QT or GTK for that reason, or can they get such HW support too, or what?

I've read posts mentioning that SDL can "access the framebuffer directly". Is this true? What does that mean?

Also,
Double buffering requires pushing anywhere near 2^10 times more data? Doesn't doubling the buffer only account for needing to push around 2^1 (twice) the data?

I was talking about 2^10 in comparison to the old platforms you were referencing… which includes the resolution, bits per pixel, double buffering, etc.

In reality, since these older platforms had hardware accelerated sprites and SDL typically does not, it's really more like 2^1000 times

Whoa, let's watch our numbers. 2^1000 is bigger than the number of atoms in the universe ( < ~10^70? ). =)

the data that has to go through the pipe.

at 32bpp, at higher resolutions than old devices.

Changing the depth from 32 to 8 isn't affecting performance for me, only the look:

screen = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

(Note: I still need to add SDL_DisplayFormat() to the example code, which lacks it.)

If you're not using a full screen context, SDL probably has to use 32bpp anyway, so 8 bit is just going to slow it down.

OpenGL doesn't really have this problem, because all the hard stuff (hopefully) happens inside the card, which is fast.

What about those 2D-only games, like Syndicate, that run super fast on modern PCs (the screen scrolls so fast you can hardly play)? How do they get that performance?

Who cares? It's not performance. If you're updating the display faster than the vertical refresh rate, then you are wasting CPU cycles and making it look worse due to the inevitable tearing.

I care because I'm using all my CPU cycles just to get it to go 80 frames/second, or 30% of my CPU cycles to get it to go 30 frames/second, which is around, let's say, the minimum of where I want it.

Thanks,
Gary


G B wrote:

Please grab the tutorial I'm talking about and look at it. It is short. It does not explicitly try to do that, as best as I can see.

You’re right, it’s not doing any time-based movement as far as I can see.

Perhaps that is what is slowing me down. However, on my laptop I'm getting ~160 frames and there is tearing, so it's not limited by vsync there, it seems, yet it still doesn't go very fast (only 160).

It is. I'd be willing to bet that your laptop doesn't register a refresh rate to applications, and therefore doesn't use one, thus SDL ignores frame-sync; OR perhaps your laptop, running an LCD screen, has a refresh rate of ~160 Hz. SDL defaults to vsync when it can, because anything MORE than that is a waste of time. If you ran your NES emulator so that it ONLY drew each frame ONCE per physical monitor refresh, IT TOO would run much slower. Many emulators are written to process the video and get it out there ASAP, ignoring vsync. SDL does NOT ignore it unless you tell it to.

SDL (from what I’ve seen) assumes that you want to avoid tearing, since
that makes your game look like crap.

glxgears is ALSO probably written so that it ignores vsync. Some APIs
have it on by default, some have it off.

As I believe was mentioned, MANY older games were written for a specific
hardware set – they KNEW what HW they’d run on, and so they KNEW what
framerate they’d get in their game. Thus, they wrote their game FOR
that hardware system, expecting that framerate. After computers started
getting faster, they discovered how much of a mistake that can be,
because suddenly Wolfenstein3D ran FAR faster than it was designed to.

I SWEAR that what you're seeing can ONLY be a result of your monitor's refresh rate. Please check that (or even change it) and see if there's a corresponding number in your framerates. Most of us call 80 fps a "perfect" framerate, because it is physically impossible to see better, assuming your monitor runs at 80 Hz.

–Scott

G B writes:

All I'm trying to understand is how to get SDL to produce really fast animation. By fast I mean like old games that run too fast to play, so you have to slow them down.

In short, you don't. Software-based 2D rendering as currently provided by SDL is quite slow. If you have fast hardware (>2GHz and a good gfx card) and run at resolutions like 800x600, you might get similar speed (60fps or above) as you got on the old consoles; however, you can't go much faster than that, since the hardware is already at its limit. If you use lots of transparency effects, a larger resolution or a slower computer, you might not even reach that 60fps. So if you want a fast action game, full of shiny transparent explosions and other effects, plus a good resolution and framerate, the only real answer is to not use the software-based 2D rendering provided by SDL, but instead switch to OpenGL. With OpenGL, all the hard work that makes software rendering so slow is done on the graphics card itself, not the CPU, meaning it will be a lot faster, since the graphics card is specifically designed for that task, unlike the CPU. There are of course also some tricks to get software 2D rendering to be fast enough, like dirty rectangles and the like; however, those generally only work when your screen content doesn't change often, say in a turn-based strategy title or the like.
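A rough sketch of the dirty-rectangle idea in SDL 1.2 (it assumes a single-buffered screen surface, since SDL_UpdateRects() is not meant for flipped SDL_DOUBLEBUF displays):

#include "SDL.h"

/* Redraw and push to the display only the two areas that changed:
 * the sprite's old position and its new one. */
static void move_sprite(SDL_Surface *screen, SDL_Surface *sprite,
                        SDL_Rect oldpos, SDL_Rect newpos)
{
    SDL_Rect dirty[2];

    SDL_FillRect(screen, &oldpos, 0);               /* erase old position */
    SDL_BlitSurface(sprite, NULL, screen, &newpos); /* draw new position  */

    dirty[0] = oldpos;
    dirty[1] = newpos;
    SDL_UpdateRects(screen, 2, dirty);  /* only these regions are pushed */
}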

So, short summary: if you want a fast action game on the PC, use OpenGL. If you want to create a slower-paced game, SDL's software rendering will be fine and might also allow your game to run on some handheld devices, which OpenGL won't allow today. In the future this whole issue may fade away if 2D-over-OpenGL rendering gets moved into SDL itself, as is currently done by glSDL.

You can also combine OpenGL and SDL rendering and offer both; that's what SuperTux currently does. You can look there if you want some example code on how to accomplish that.

WWW: http://pingus.seul.org/~grumbel/
JabberID: grumbel at jabber.org
ICQ: 59461927

— Scott Harper wrote:

SDL defaults to vsync when it can,
because anything MORE than that is a waste of time. If you ran your NES
emulator so that it ONLY drew each frame ONCE per physical monitor
refresh, IT TOO would run much slower. Many emulators are written to
process the video and get it out there ASAP, ignoring vsync. SDL does
NOT ignore it unless you tell it to.

I'm not sure how much of the bottleneck I'm seeing is the vsync. Maybe fps is being cropped only slightly by it, because when I SDL_Delay for even 5ms, which should still leave enough time for drawing, I get fewer fps. I would like to try it with vsync ignored, but I'm not sure how to do that. Is there SDL code to do it, or do I have to change a driver setting? I'm on Fedora Core 2 with the Rage II card. Maybe changing a driver setting will be easier on my Windows machine, if I do have to make such a change. In that case I'll try it when I get home.

By the way, I know SDL is usually for 2D games. I'm using it for a custom GUI for a PIII 733 MHz machine where the application requires smoothly scrolling text over a background image. That is why I initially chose SDL: because I thought it might give better performance than QT because of the access to the video framebuffer, and because QT was too big and complicated for my purposes, I thought.

Thanks,
Gary


Bob,
Thanks for your help this far.

— Bob Pendleton wrote:

If I can't access the VRAM directly (which is the hardware), what does it mean that SDL gives you a HWSURFACE that allows you to "access the framebuffer directly"? What is the framebuffer, then?

Since your original statement was wrong, the conclusion is wrong. If you asked for a hardware surface and got it, then you have a pointer into some part of the framebuffer.

I was going on what someone on the mailing list just told me - that you usually don't have access to VRAM. I should have replied to him by asking "in the likely case that you don't have access to VRAM".

But, either way, the first part of the sentence is not a conclusion, because it starts with an "if". Secondly, the latter part is certainly not a conclusion, since I'm legitimately asking two questions. I think the questions seemed rhetorical, but they weren't. I was confused.

It might have been useful for me if someone had answered with something like: "In the case that you don't have access to VRAM via SDL for whatever reason, it means you won't get the HWSURFACE and won't be drawing to the framebuffer - which is a buffer in VRAM. Otherwise, you might be able to get a HWSURFACE, which means you'll have access to a piece of the framebuffer in VRAM. However, that does not guarantee a performance gain over using a software surface without direct access to the framebuffer." This is my current understanding of the situation, now having read more documentation, including, I think, your article. =)

Just some ordinary place in main memory that is the specified location that the CPU ships to the VRAM? How is it any more efficient than using QT or GTK, which supposedly don't access it directly?

There is really no question here. All of the above, GTK, QT, and SDL
have to work with the same restrictions and can be equally efficient or
inefficient depending on what you want to do. Also, while GTK and QT are
GUI toolkits, SDL is not.

Actually it seems I was asking if the framebuffer is just some place in main
memory. My current understanding is that it is not. It is in VRAM, and you
don’t have access to it via SDL in all configurations.

Another question I was asking is how, and implicitly whether, SDL performs better than QT and GTK for animation.

By the way, I'm making a GUI for an application. I was originally thinking of using QT but thought it was too big and complicated, and that perhaps SDL could perform better because of HW access, so I'm now making the GUI in SDL - which has been fine, but I'm still open to warnings to turn back for whatever reasons/experiences people have to offer.

Does it just save you having to copy some buffer in main memory to the specified "framebuffer" area, which is also just in main memory? What's the performance gain of using SDL over QT or GTK?

Why would you believe there is one, or could be one?

I should have asked what performance gains, if any, there are because of the direct framebuffer access. I thought there might be performance gains because of the whole direct-access-to-the-framebuffer thing, but I know there are many other factors involved.

SDL makes some kinds of programming much easier to do. GTK and QT make other kinds of programming easy to do. So, from a human-time point of view, their efficiency depends on the application you are trying to build. But from a hardware point of view, they all work through the same layers and all face the same problems.

Wait, they work through the same layers? What do you mean by this? You don’t
mean they can access the framebuffer on VRAM directly, do you?

Also,
Double buffering requires pushing anywhere near 2^10 times more data? Doesn't doubling the buffer only account for needing to push around 2^1 (twice) the data?

Double buffering just means you have two (or more) buffers. Nothing
more, nothing less. There is really very little relationship between the
amount of data you have to push around to create a scene and the number
of buffers.

Yeah, I was a little confused when someone said there would be more data to push around because of this.

Don't know that game, but there are two main ways 2D games get speed. 1) The old games were written for very slow hardware and don't take actual time into account, so they just naturally run faster on faster hardware. 2) The visual speed of an object moving on the screen depends on how far you move it from frame to frame. If I move it 1 pixel/frame at 100 fps, it seems to move at the same visual speed as if I were moving it 10 pixels/frame at 10 fps; both give a visual motion of 100 pixels/second, though the 10 fps animation may look rather jerky…

Of course. However, I've seen seemingly smooth, visually fast-scrolling, ancient games on ancient consoles, so I thought that getting the same smoothness shouldn't take up 90% of my CPU (70% for X, and 20% for the application).

In well-written games there is no relationship between visual speed and the number of frames being drawn each second.

Isn't that overstated? If you get too few frames a second, you might not even see an event happen, or won't see everything that happened, or will at least possibly have difficulty interpolating where the object should be at the moment, etc… Besides, my objective is not to communicate the visual speed at which an object should travel, but to actually make that object travel along that trajectory in a smooth, easy-to-follow, nice-looking fashion.

Thanks,
Gary


Let me reiterate, you are drawing a lot of conclusions from false
premises. I know from experience that doing that will cause great
confusion.

Thanks for sharing the learning from your experiences,
Gary


Isn't that overstated? If you get too few frames a second, you might not even see an event happen, or won't see everything that happened, or will at least possibly have difficulty interpolating where the object should be at the moment, etc… Besides, my objective is not to communicate the visual speed at which an object should travel, but to actually make that object travel along that trajectory in a smooth, easy-to-follow, nice-looking fashion.

And you can't do it. When running on a multitasking system, whether Windows, *nix, or Mac OS X, every time the OS does something it eats away cycles from your game. It's easier to notice when the system swaps to disk, for instance. You won't be able to run smoothly in these situations; no matter what you do, you can't control the programs the end user runs in the background.

The best you can do is to keep track of time, to keep things smooth and at constant speed, so that in the event something happens you slow down the game to avoid losing precision. This is the most you can do, unless you write games "on the metal" for DOS, which is not feasible nowadays.
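A sketch of that idea; the 100 ms cap is an arbitrary choice:

#include "SDL.h"

/* Time step with a clamp: if the OS stalls the process (disk swap,
 * background programs), cap dt so the game slows down instead of
 * taking one huge, precision-losing jump. */
static float frame_dt(void)
{
    static Uint32 last = 0;
    Uint32 now = SDL_GetTicks();
    float dt;

    if (last == 0)
        last = now;                /* first call: zero-length step */
    dt = (now - last) / 1000.0f;   /* seconds since previous frame */
    last = now;

    if (dt > 0.1f)                 /* stalled: slow down, don't jump */
        dt = 0.1f;
    return dt;
}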

Best regards,

Paulo V W Radtke

http://blog.chienloco.com
http://www.chienloco.com

G B,
AFAIK, having access to the framebuffer (VRAM) only occurs when your program is run full-screen. You'll never (someone correct me if I'm wrong, but really, I think it is this way) get a windowed hardware surface. If you're programming an "application", which implies, at least to me, that you'll be running in windowed mode, you won't be able to access VRAM directly…

OTOH, if you don't mind using the full-screen setting, you can in many instances (not all, though it's actually never happened to me…) get a tremendous speed increase in your program using hardware surfaces (without double-buffering, that is). Differences like going from 100 fps to more than 1000 (in some rare cases I've actually hit in excess of 5000 fps, but only when blitting very basic scenes). The drawbacks to this increase in speed are not insignificant. If you want any sort of fancy alpha-blending (usually people dealing with text, like you mentioned, want this feature), it slows down incredibly fast (dropping in most cases below the performance of plain old software surfaces). One must also deal with a different blitting mechanism in order to avoid the excess tearing, etc. already mentioned; also you have to deal with your own cursor, losing surfaces when switching resolutions, the list goes on. Basically, it's a huge headache, and after many experiments on my part with it, I decided to just stick with regular software surfaces. They are much easier.

One other thing to remember, in comparing the NES to computers now: the output resolution of the NES (and even modern gaming equipment, video games I mean) is much lower (typically 320x240, 400x300 at most) since they are targeted for a television. If you make your SDL window that small, your speed increases quite a bit, but it's really too small for a computer screen… Anyway, I've blabbed enough. I've enjoyed reading these lists, but I don't think I've ever posted before… Any corrections/contradictions to the above are welcome!
-jolynsbass
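A sketch of requesting such a full-screen hardware surface and checking what was actually granted (SDL_SetVideoMode() falls back silently, so the flags on the returned surface are the only way to know):

#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Hardware surfaces are generally only granted full-screen. */
    screen = SDL_SetVideoMode(640, 480, 32,
                              SDL_FULLSCREEN | SDL_HWSURFACE);
    if (screen == NULL)
        return 1;

    if (screen->flags & SDL_HWSURFACE)
        fprintf(stderr, "got a hardware surface\n");
    else
        fprintf(stderr, "fell back to a software surface\n");

    SDL_Quit();
    return 0;
}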

On Mon, 2004-12-20 at 21:25, G B wrote:

By the way, I know SDL is usually for 2D games.

No, it isn’t. Nothing in SDL makes it “usually for 2D games”. It works
just fine for 3D games.

I'm using it for a custom GUI for a PIII 733 MHz machine where the application requires smoothly scrolling text over a background image.

One of my students did a typing game that had smoothly scrolling text over an image for his term project. Worked great, looked great. Did it in SDL with OpenGL. Even though it was a pure 2D game, he used OpenGL to get the best hardware acceleration available on the machine.
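The usual setup for that approach is an SDL_OPENGL window with an orthographic projection, so screen coordinates map to pixels and textured quads play the role of blits. A sketch; the helper name is made up:

#include "SDL.h"
#include "SDL_opengl.h"

/* Create an OpenGL context through SDL and set up a 2D pixel-space
 * projection.  Each frame: draw textured quads, then SDL_GL_SwapBuffers(). */
SDL_Surface *init_gl_2d(int w, int h)   /* hypothetical helper */
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return NULL;
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    screen = SDL_SetVideoMode(w, h, 0, SDL_OPENGL);
    if (screen == NULL)
        return NULL;

    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, h, 0, -1, 1);         /* pixel coordinates, y down */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    return screen;
}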

That is why I initially chose SDL: because I thought it might give better performance than QT because of the access to the video framebuffer,

Direct access to the frame buffer is often the slowest way to do
graphics on modern computers.

Bob Pendleton


On Mon, 2004-12-20 at 22:17, G B wrote:

Of course. However, I've seen seemingly smooth, visually fast-scrolling, ancient games on ancient consoles, so I thought that getting the same smoothness shouldn't take up 90% of my CPU (70% for X, and 20% for the application).

And I was trying to point out that that is a really bad assumption. But,
it does illustrate the difference between using hardware acceleration
and not using it. Using hardware acceleration it is possible to get very
smooth animation using only a few percent of the CPU.

In well-written games there is no relationship between visual speed and the number of frames being drawn each second.

Isn't that overstated? If you get too few frames a second, you might not even see an event happen, or won't see everything that happened, or will at least possibly have difficulty interpolating where the object should be at the moment, etc…

Two different problems here. One is the relationship between fps and
perception; if the frame rate is too low you lose the feeling of reality
and, indeed, you can miss events that take place between frames. The
other is the problem of moving something at a given visual speed. That
really is independent of the frame rate. It might look like crap at a
low fps, but you can make an object move at a given visual velocity
completely independent of the frame rate.

There are situations where you have to deal with a highly variable frame
rate. Sometimes the OS just stops you for a while, sometimes it just
takes longer to draw a frame.

Besides, my objective is not to communicate the visual speed at which an object should travel, but to actually make that object travel along that trajectory in a smooth, easy-to-follow, nice-looking fashion.

Same thing. To make the object travel along a trajectory in a smooth, easy-to-follow, nice-looking fashion is exactly communicating the object's visual velocity.

Bob Pendleton
