A few questions


Okay, I’m currently trying to get the best game framerate I can, so that after limiting it the game can run as leanly as possible. The game is a 2D game. Right now I get about 140+ FPS at 16 bits per pixel, but this drops to 40-80 at 32 bits per pixel. I thought it wouldn’t be an issue (just set the game depth to 16 bits per pixel and work within those art parameters)

Well, when blitting a 32-bit image vs. a 16-bit image, you are copying twice as much data. Also, if you’re getting 140+ FPS, you might want to consider v-sync; you’re probably getting a lot of tearing, which may not show up initially but can start showing up later.

but in running it through a few testers, it showed up that it also needed the background screen depth to be 16 bits per pixel. (All tests were on Windows machines of various types, but every one that had it set to 32 bits per pixel in the display console had this problem, which vanished when setting that to 16.)

When you say “background screen depth,” do you mean the color depth
setting in the Display control panel?

So my first question: are there any ways to fix this? What ways would the list suggest? Theoretically there must be a way to ask for the whole context to switch to that, but that seems a rough way to achieve it.

Most 2D games would be very satisfied with 40-80 FPS. Is there a reason
you are not? Does your game employ framerate-fixed logic?

Next, on a different project I’ve hit a point where it’d be beneficial to be able to render either multiple windows or sub-windows. I know this is doable in OpenGL (having done it through GLUT before), but I don’t know what is possible on this front in SDL, and I don’t see anything in the tutorials that really seems to handle window management.

SDL is unfortunately geared towards games, which are usually best
suited for one window. Unless it’s necessary to see your other
applications’ windows “between” your SDL application’s windows, perhaps
you could simply create a bigger window and render two separate
rectangles with the images you’d like to render in separate windows.

In other words, you could try to paint your own windows.

As a final random question, is there anything in SDL that maps in functionality to how GLUT’s menufunc works? It’d be nice to be able to easily produce those kinds of key-bound menu screens, but again, I don’t see anything that really seems to do it (I could implement it myself, but I don’t want to reinvent the wheel if there’s already a way to do it).

I’m not sure exactly what GLUT’s menufunc is. However, if you want to create in-game menus, there are quite a few libraries out there for creating GUIs in both SDL and OpenGL.

On Sep 27, 2004, at 2:05 PM, bloomp at rpi.edu wrote:


bloomp at rpi.edu wrote:

Okay, I’m currently trying to maintain the best game framerate I can so
that after limiting it so the game can run as leanly as possible. The game
is a 2d game. Right now I can get about 140+ with 16 bits per pixel but
this drops to 40-80 with 32 bits per pixel. I thought it wouldn’t be an
issue (just set the game depth to 16 bits per pixel and work within those
art parameters) but in running it through a few testers, it showed up that
it also needed the background screen depth to be 16 bits per pixel (All
tests on windows machines of various types, but every one that had it set
to 32 bits per pixel in the display console had this problem, which
vanished when setting that to 16). So my first question: are there any ways
to fix this? What ways would the list suggest?

I’m not sure what you’re asking exactly, but if this is just a performance problem: did you try using SDL_DisplayFormat on the surfaces that will subsequently be blitted to screen?

Stephane


bloomp at rpi.edu wrote:

issue (just set the game depth to 16 bits per pixel and work within those art parameters) but in running it through a few testers, it showed up that it also needed the background screen depth to be 16 bits per pixel (all tests on Windows machines of various types, but every one that had it set to 32 bits per pixel in the display console had this problem, which vanished when setting that to 16). So my first question: are there any ways to fix this? What ways would the list suggest?

I’m not sure what you’re asking exactly, but if this is just a performance problem: did you try using SDL_DisplayFormat on the surfaces that will subsequently be blitted to screen?

I am using that. It’s a minor performance problem anyway, though. Basically, regardless of how the surfaces are set up, if the user has his display properties (in Windows, anyway) set to 32 bits, it acts just the same as if all surfaces were being handled as 32-bit surfaces and thus takes the longer time to blit. I’d like to be able to guarantee to a reasonable degree that it is indeed working with 16-bit pixel depth (which demonstrably gets about 2-4x the performance on average).

Ok, I see.

Do you allow the creation of a shadow surface or not? That is, do you use the SDL_ANYFORMAT flag or a bpp of 0 in SDL_SetVideoMode?

Stephane


bloomp at rpi.edu wrote:

I am using that. It’s a minor performance problem anyway, though. Basically, regardless of how the surfaces are set up, if the user has his display properties (in Windows, anyway) set to 32 bits, it acts just the same as if all surfaces were being handled as 32-bit surfaces and thus takes the longer time to blit. I’d like to be able to guarantee to a reasonable degree that it is indeed working with 16-bit pixel depth (which demonstrably gets about 2-4x the performance on average).

Ok, I see.

Do you allow the creation of a shadow surface or not? That is, do you use the SDL_ANYFORMAT flag or a bpp of 0 in SDL_SetVideoMode?

Stephane

Nope. Current flags used:

//INITIALIZE GRAPHICS LIBRARY!
int videoflags = 0; // Determines the video flags used. Class variable for screen size changes.

// If we can't start up the video process, spit out an error and exit.
if (-1 == SDL_InitSubSystem(SDL_INIT_VIDEO))
{
    clog::get().write(LOG_CLIENT, IDS_GENERIC_SUB_INIT_FAIL, "Video", SDL_GetError());
    failed = true;
}

// Make sure accidental exits close SDL.
atexit(SDL_Quit);

// Set icon first.
SDL_WM_SetIcon(SDL_LoadBMP("Media\\icon.bmp"), NULL);

SDL_ShowCursor(SDL_DISABLE); // hide cursor

// Set up flags.
videoflags |= SDL_HWPALETTE; // Use a HW palette
videoflags |= SDL_DOUBLEBUF; // Double buffering is good

// Scan the graphics hardware available.
const SDL_VideoInfo *videoinfo = SDL_GetVideoInfo();

// Flag an error if no information was found.
if (videoinfo == NULL)
    failed = true;

// Try to get a hardware surface.
if (videoinfo->hw_available) // is a hardware surface available?
    videoflags |= SDL_HWSURFACE;
else
    videoflags |= SDL_SWSURFACE;

// If blitting can be accelerated, do it.
if (videoinfo->blit_hw) // is hardware blitting available?
    videoflags |= SDL_HWACCEL;

SDL_HWACCEL is not meant to be used in SDL_SetVideoMode. It does nothing
there.

//Setup the display screen, 800x600
SDL_Surface* screen = SDL_SetVideoMode(800, 600, 16, videoflags);

By calling SDL_SetVideoMode that way, you are forcing the creation of a
16bpp video surface. If you force the creation of a 16bpp surface while
the underlying screen is 32bpp, SDL will create a so-called “shadow
surface”. This surface is a surface between your application and the
video memory, and it has the pixel properties you requested (here, it is
a 16bpp surface). So what will happen from there? SDL_SetVideoMode will
return the shadow surface to your application, and your application will
draw to it. Then, during SDL_Flip/UpdateRect, this shadow surface will
be copied (and converted at the same time) to 32 bpp. That can be slow,
because this prevents hardware accelerated blits and does an additional
copy.

What can you do about that? Let SDL create a 32bpp video surface (by using the SDL_ANYFORMAT flag, for example). Then SDL_DisplayFormat all your resources.

Stephane

If 16bpp is what you are designing for, then you should try to get a non-emulated 16bpp mode if possible. However, don’t assume that a 16bpp mode will always be available, and always convert all loaded surfaces with SDL_DisplayFormat. To determine the best video mode to set, use SDL_GetVideoInfo or SDL_VideoModeOK. Sometimes shadow surfaces might be suitable, but for most applications, using a video mode which doesn’t require a shadow surface and converting all surfaces with SDL_DisplayFormat will be your best solution.

On Mon, Sep 27, 2004 at 02:05:40PM -0400, bloomp at rpi.edu wrote:

Okay, I’m currently trying to maintain the best game framerate I can so
that after limiting it so the game can run as leanly as possible. The game
is a 2d game. Right now I can get about 140+ with 16 bits per pixel but
this drops to 40-80 with 32 bits per pixel. I thought it wouldn’t be an
issue (just set the game depth to 16 bits per pixel and work within those
art parameters) but in running it through a few testers, it showed up that
it also needed the background screen depth to be 16 bits per pixel (All
tests on windows machines of various types, but every one that had it set
to 32 bits per pixel in the display console had this problem, which
vanished when setting that to 16). So my first question: are there any ways
to fix this? What ways would the list suggest? Theoretically there must
be a way to ask for the whole context to switch to that but that seems a
rough way to achieve that.


Martin - http://akawaka.csn.ul.ie/
Bother! said Pooh, as he failed another melee combat roll.

Sometimes shadow surfaces might be
suitable, but for most applications using a video mode which doesn’t
require a shadow surface and converting all surfaces with
SDL_DisplayFormat will be your best solution.

I thought this sounded a little ambiguous…

To clarify, using a shadow surface will be slower than not using it.
And you should convert all your image data using SDL_DisplayFormat.
Remember that SDL_DisplayFormat creates a NEW surface containing the converted image data. So remember to SDL_FreeSurface the original images after converting them.

On Oct 3, 2004, at 1:28 PM, Martin Donlon wrote:

Sometimes shadow surfaces might be
suitable, but for most applications using a video mode which doesn’t
require a shadow surface and converting all surfaces with
SDL_DisplayFormat will be your best solution.

I thought this sounded a little ambiguous…
Yes, it was, and deliberately so.

To clarify, using a shadow surface will be slower than not using it.
Not true; it depends on how the screen surface is going to be used. With a 16bpp shadow surface and a 32bpp native screen surface, SDL has to do a 16->32 bpp conversion each frame, which is expensive. However, since you are writing 16bpp data to the shadow surface, your blits will be faster than if you were writing to a 32bpp surface. So if you are doing a lot of overdraw or alpha blending, the cost of writing 32bpp pixel data might be higher than the cost of the conversion.

So you can’t just assume that using a shadow surface will be slower; you have to consider how it is going to be used, and profile.

On Sun, Oct 03, 2004 at 06:39:00PM -0400, Donny Viszneki wrote:

On Oct 3, 2004, at 1:28 PM, Martin Donlon wrote:


Martin - http://akawaka.csn.ul.ie/
“Bother,” said Pooh as he stepped into the particle accelerator.

Sometimes shadow surfaces might be
suitable, but for most applications using a video mode which doesn’t
require a shadow surface and converting all surfaces with
SDL_DisplayFormat will be your best solution.

I thought this sounded a little ambiguous…
Yes, it was, and deliberately so.
This remark is definitely more confusing than what I replied to (unless
everyone on this list knows something about manufactured ambiguity that
I don’t.)

To clarify, using a shadow surface will be slower than not using it.
Not true; it depends on how the screen surface is going to be used. With a 16bpp shadow surface and a 32bpp native screen surface, SDL has to do a 16->32 bpp conversion each frame, which is expensive. However, since you are writing 16bpp data to the shadow surface, your blits will be faster than if you were writing to a 32bpp surface. So if you are doing a lot of overdraw or alpha blending, the cost of writing 32bpp pixel data might be higher than the cost of the conversion.

Haha… looks like I created a bit of ambiguity of my own. I should have said that the situation where you’d want to use a shadow surface will be slower than if you were using game media in the native screen format.

On Oct 3, 2004, at 11:37 PM, Martin Donlon wrote:
