Optimal mode from SDL_GetVideoMode

Hi, I’ve recently tried to find the optimal mode from SDL_GetVideoMode. The result is that all the acceleration flags (hw_available, blit_hw, blit_hw_CC, blit_hw_A, blit_sw, blit_sw_CC, blit_sw_A, blit_fill) are 0. So, is it better to use the flag SDL_HWSURFACE or SDL_SWSURFACE? What is the optimal flag if I want to use either colorkey or alpha blending? And why is video_mem always returned as zero?
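
For reference, these flags live in the SDL_VideoInfo struct. A minimal sketch (assuming SDL 1.2) of printing them via SDL_GetVideoInfo():

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Query the driver's capabilities (best done before SDL_SetVideoMode). */
    const SDL_VideoInfo *info = SDL_GetVideoInfo();
    printf("hw_available: %u\n", (unsigned) info->hw_available);
    printf("blit_hw:      %u\n", (unsigned) info->blit_hw);
    printf("blit_hw_CC:   %u\n", (unsigned) info->blit_hw_CC);
    printf("blit_hw_A:    %u\n", (unsigned) info->blit_hw_A);
    printf("blit_sw:      %u\n", (unsigned) info->blit_sw);
    printf("blit_sw_CC:   %u\n", (unsigned) info->blit_sw_CC);
    printf("blit_sw_A:    %u\n", (unsigned) info->blit_sw_A);
    printf("blit_fill:    %u\n", (unsigned) info->blit_fill);
    printf("video_mem:    %u KB\n", (unsigned) info->video_mem);

    SDL_Quit();
    return 0;
}
```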

Hello benang,

On what platform?
Best regards,
Peter <darkmatter at freeuk.com>

The best mode really depends on the driver you are using. Under X11, for example, you only have software surfaces.
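
A quick way to check which driver you ended up with (a minimal sketch, assuming SDL 1.2):

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Reports the backend actually in use, e.g. "x11", "fbcon", "directx". */
    char name[32];
    if (SDL_VideoDriverName(name, sizeof(name)) != NULL)
        printf("Video driver: %s\n", name);

    SDL_Quit();
    return 0;
}
```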

On Linux SuSE 10.0.

Thanks. Maybe that’s why my app runs slowly (I used a hardware surface all along).

Actually, SDL falls back to a software surface if you request a hardware surface and the current backend does not support it, so in your case it shouldn’t make any difference performance-wise.

Keep in mind that plain X11 is always software rendered, and therefore not terribly fast (but fast enough for most uses). To remedy this, you must either 1) use OpenGL instead of the 2D API, or 2) wait for SDL 1.3, which will provide an OpenGL backend for the 2D API.

  • SR
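
A quick way to see the fallback in your own program (a minimal sketch, assuming SDL 1.2) is to request SDL_HWSURFACE and then check the flags on the surface SDL_SetVideoMode() actually returned:

```c
#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* Ask for a hardware surface; SDL may silently hand back a software one. */
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                           SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL) {
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    if (screen->flags & SDL_HWSURFACE)
        printf("Got a hardware surface.\n");
    else
        printf("Fell back to a software surface.\n");

    SDL_Quit();
    return 0;
}
```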

Yeah, the X11 target returning 0 for available video memory is not a
bug; there are no hardware surfaces.

While it’s not a fast path, it should not kill your performance,
generally…I have yet to see a 2D game that couldn’t manage with
software surfaces, even when doing a fair bit of alpha blending. Usually
the bottleneck is converting data formats during blits, so be sure to
preconvert them! Otherwise, never call SDL_UpdateRect or SDL_Flip until
forced to.

Or use OpenGL, of course.

–ryan.
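
As a sketch of the preconversion mentioned above (assuming SDL 1.2; the helper name and .bmp loading are just an example):

```c
#include "SDL.h"

/* Load a bitmap and convert it to the screen's pixel format once, at load
 * time, so later blits don't have to convert pixel-by-pixel every frame.
 * Call this after SDL_SetVideoMode() has established the display format. */
static SDL_Surface *load_preconverted(const char *path)
{
    SDL_Surface *raw = SDL_LoadBMP(path);
    if (raw == NULL)
        return NULL;

    SDL_Surface *converted = SDL_DisplayFormat(raw);
    SDL_FreeSurface(raw);
    return converted;   /* NULL if the conversion failed */
}
```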

So it doesn’t matter either way? Well, I’m currently developing on a Pentium 4, but the intended machine for my game is an embedded system with a 400 MHz clock, and it’s A LOT slower than on my PC. Maybe it’s because I used 1024x768 at 32 bpp. Is it possible for me to improve performance while retaining my current video mode?

I get it. And here I thought there was an error in my graphics card. Well, I didn’t store the images as RWops; I converted all the bitmap files to Surfaces when I read them, so that shouldn’t pose a problem. Also, I only call SDL_Flip once in every loop. Or maybe I should’ve added some algorithm to check whether the surface has been updated (so I don’t have to Flip the screen) to improve performance?

Well, okay, embedded stuff is a different story. :)

(The rest assumes that OpenGL is out of the question, and there isn’t a
3D accelerator in the system at all…)

Are you really going to run an X server on that? If it’s an embedded
device, you could probably save some resources and overhead by using SDL
with the fbcon target. The system may be dramatically different, but
your SDL-based code shouldn’t have to change at all.

32-bit graphics means a lot of memory bandwidth on an embedded system;
if you can get by with 16-bit color, that’ll speed things up quite a
bit. If you can turn off the alpha blending, that’ll save both memory
bandwidth and CPU time for blits. This is really expensive, and in many
cases a colorkey blit will be much faster if that’s all you actually need.
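
A sketch of both suggestions (assuming SDL 1.2; the sprite file name and the magenta colorkey are just placeholders):

```c
#include "SDL.h"

static SDL_Surface *screen;
static SDL_Surface *sprite;

static int setup_video(void)
{
    /* 16 bpp needs half the memory bandwidth of 32 bpp; SDL emulates it with
     * a shadow surface if the display can't do it natively. */
    screen = SDL_SetVideoMode(1024, 768, 16, SDL_SWSURFACE);
    if (screen == NULL)
        return -1;

    sprite = SDL_LoadBMP("sprite.bmp");
    if (sprite == NULL)
        return -1;

    /* Treat magenta as transparent; SDL_RLEACCEL lets blits skip transparent
     * runs cheaply, which is much less work than per-pixel alpha blending. */
    SDL_SetColorKey(sprite, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(sprite->format, 255, 0, 255));
    return 0;
}
```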

If you convert the surfaces to the display format, non-alpha blits
basically become memcpy() calls instead of complicated efforts to
convert them to the correct format pixel-by-pixel every time. If you can
isolate parts of the screen that need updating every frame (say, an
object that moved and its previous position need redrawing instead of
the whole screen), then you should try using a "dirty rectangle"
approach with SDL_UpdateRects() instead of SDL_Flip(), which reduces the
amount of blitting to do in the first place.
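
A minimal sketch of the dirty-rectangle idea (assuming SDL 1.2 and a single-buffered software screen; the helper names are made up):

```c
#include "SDL.h"

#define MAX_DIRTY 64

static SDL_Rect dirty[MAX_DIRTY];
static int num_dirty = 0;

/* Remember a region that was redrawn this frame. */
static void mark_dirty(Sint16 x, Sint16 y, Uint16 w, Uint16 h)
{
    if (num_dirty < MAX_DIRTY) {
        SDL_Rect r = { x, y, w, h };
        dirty[num_dirty++] = r;
    }
}

/* Push only the changed regions to the display instead of SDL_Flip(). */
static void present_frame(SDL_Surface *screen)
{
    if (num_dirty > 0) {
        SDL_UpdateRects(screen, num_dirty, dirty);
        num_dirty = 0;
    }
}
```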

Generally speaking, those are the biggest optimization wins with 2D
graphics.

–ryan.

Thanks for the feedback. It’s very enlightening. Just as I thought, using X will make the game run slower. The truth is, I’m only just getting to know Linux. Well, I’ve used Linux before in practicums in my college days a few years ago (although only the basics), so I’m pretty much a not-so-newbie-but-not-so-seasoned Linux user. I’ve tried running my application in a pure console (without a window manager at all) and it returned an error. So at that time I assumed SDL can’t be used in a console. Can you point me to some resources for using SDL with fbcon?

Maybe I’ll try the “dirty rectangle” approach first; it’s the only one I’m capable of doing right now.

Thanks.

benang at cs.its.ac.id wrote:

Or maybe I should’ve added some algorithm to check whether the surface has been updated (so I don’t have to Flip the screen) to improve performance?

Not only that, but you should use UpdateRect() instead of Flip() if not all of your screen contents have changed.

clemens

Thanks for the feedback. It’s very enlightening. Just as I thought, using
X will make the game run slower.

Not necessarily: if there’s an X driver for the video chip (versus the
fbcon “vesa” driver, for example) it may be quite a bit faster…but on
an embedded system it’ll take a lot more memory to run an X server.
There may be a little more overhead involved in X11, but I wouldn’t
imagine it to be significant beyond memory requirements. It’s all a
matter of what you have and what you need, so I wanted to throw the idea
out there.

Generally using fbcon requires the kernel to support it, and SDL to be
built with support for it. Do this last, since all the other details of
your program that you can do in X will apply to it when using fbcon. In
fact, you shouldn’t even need to recompile the program to switch video
targets.
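
For instance (a sketch; whether the fbcon driver is available depends on how your SDL was built), the target can be chosen at runtime through the SDL_VIDEODRIVER environment variable, either from the shell or programmatically before SDL_Init():

```c
#include <stdio.h>
#include <stdlib.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    /* Equivalent to running the binary as `SDL_VIDEODRIVER=fbcon ./game`.
     * Use "x11" for the X target; fbcon usually needs access to /dev/fb0. */
    setenv("SDL_VIDEODRIVER", "fbcon", 1);

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "Couldn't start video: %s\n", SDL_GetError());
        return 1;
    }

    /* ... set the video mode and run the game as usual ... */

    SDL_Quit();
    return 0;
}
```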

The biggest optimization win will probably be a dirty rectangle
algorithm, if you don’t have to paint the whole screen every frame. The
easiest win is to preconvert all your surfaces to the screen format with
SDL_DisplayFormat() and to disable alpha blending in favor of colorkey blits, if you aren’t doing so already.

–ryan.

Okay. Will do that for now. Maybe I should’ve examined Sam’s alien example back then. :(

benang at cs.its.ac.id wrote:

Thanks for the feedback. It’s very enlightening. Just as I thought, using X will make the game run slower.

This may or may not be the case. If X is the only way for you to get your card to do DMA, then it might very well be faster, even if it eats up a little more memory.

I’ve tried running my application in a pure console (without a window manager at all) and it returned an error. So at that time I assumed SDL can’t be used in a console. Can you point me to some resources for using SDL with fbcon?

What’s your target graphics hardware? This matters a lot in your case. SDL will only accelerate blits on fbcon with some video cards.

Stephane

Well, my machine’s graphics is an “Integrated VIA UniChrome AGP Graphics
with MPEG-2 Accelerator”. I guess it’s not a familiar one because it comes
with the embedded system.

Well, the X driver for the UniChrome chips will accelerate DMA. On the other hand, I don’t think there is even a framebuffer driver for that chip. So in my view, if you have enough RAM, X is pretty much the way to go. Hey, you might even want to use OpenGL on that chip.

Stephane