SDL_HWSURFACE not working?

Hello!

I’m new to SDL, having just migrated from CDX to the promised land of
multi-platform, and I’m having some major performance problems. I’m using
MSVC 6.0, Win2K SP3, DX9, and a 64MB GF3 (I also tried this on another
machine with a 32MB GF2). So, first thing, I’m making sure all my surfaces are in
video RAM, but somehow they don’t want to go there…

This here test:

SDL_Surface* Test = SDL_CreateRGBSurface( SDL_HWSURFACE, 640, 480, 32,
                                          0xFF, 0xFF00, 0xFF0000, 0xFF000000 );

if( ( Test->flags & SDL_HWSURFACE ) == SDL_HWSURFACE )
    printf( "Test surface exists in video ram" );

Should tell me that the surface is in video RAM, no? Well, it doesn’t, and
being such a noob I cannot figure out why… I’ve done some searching in
the list archives but couldn’t really find anything relevant to this
situation. Same thing when loading bitmaps from disk: doing
SDL_ConvertSurface() to a new pointer and specifying SDL_HWSURFACE doesn’t
work either.

Any help would be much appreciated!

Thanks!

/Fredrik


Have you set the video mode before doing this? The video mode MUST be
set first. Also, it is a very good idea to pull the bits/pixel value and
the field masks out of the SDL_PixelFormat stored in the screen surface
structure so that you can be sure they are the same. And, of course,
make sure that the screen is actually a hardware surface. SDL gives you
the pixel depth you ask for, even if it has to emulate it in software.
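
Roughly, that order of operations might look like this (only a sketch; the
640x480x32 fullscreen mode is just an example, and whether the surfaces
really land in video RAM still depends on the backend and driver):

#include <stdio.h>
#include "SDL.h"

int main( int argc, char *argv[] )
{
    SDL_Surface *screen;
    SDL_Surface *test;

    if( SDL_Init( SDL_INIT_VIDEO ) < 0 )
        return 1;

    /* Set the video mode first, asking for a hardware screen. */
    screen = SDL_SetVideoMode( 640, 480, 32,
                               SDL_HWSURFACE | SDL_FULLSCREEN );
    if( screen == NULL )
        return 1;

    /* Check whether the screen itself really is in video RAM. */
    if( screen->flags & SDL_HWSURFACE )
        printf( "Screen is a hardware surface\n" );

    /* Reuse the screen's depth and masks instead of hard-coding them. */
    test = SDL_CreateRGBSurface( SDL_HWSURFACE, 640, 480,
                                 screen->format->BitsPerPixel,
                                 screen->format->Rmask,
                                 screen->format->Gmask,
                                 screen->format->Bmask,
                                 screen->format->Amask );
    if( test != NULL && ( test->flags & SDL_HWSURFACE ) )
        printf( "Test surface is in video RAM\n" );

    SDL_FreeSurface( test );
    SDL_Quit();
    return 0;
}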

	Bob Pendleton




At 15:35 2003-03-04 -0600, you wrote:

Have you set the video mode before doing this? The video mode MUST be
set first. Also, it is a very good idea to pull the bits/pixel value and
the field masks out of the SDL_PixelFormat stored in the screen surface
structure so that you can be sure they are the same. And, of course,
make sure that the screen is actually a hardware surface. SDL gives you
the pixel depth you ask for, even if it has to emulate it in software.

            Bob Pendleton

Hello again!

Thanks for your quick answer!

Yes, I set the video mode prior to conducting that test. The test has
evolved since last time, though, and I’m now actually getting the
respectable 75 fps I expect from a 75 Hz fullscreen mode. Some confusion still
remains, however.

This is some log output from my test program:

Logfile | Wed Mar 05 00:36:38 2003
Screen BPP: 32
Screen Masks RGBA: FF0000 FF00 FF 0
Screen exists in video ram!

test-1.png BPP: 32
test-1.png Masks RGBA: FF0000 FF00 FF 0
test-1.png exists in video ram!

test-3.png BPP: 32
test-3.png Masks RGBA: FF0000 FF00 FF FF000000

FPS: 76

As you might guess from the above, test-3 has an alpha channel while test-1
does not.

In my bitmap loading code I have this:

if( pTempSurf->format->Amask == 0 )
    m_pSurface = SDL_DisplayFormat( pTempSurf );
else
    m_pSurface = SDL_DisplayFormatAlpha( pTempSurf );

This yields the desired result: Full framerate and an alpha-channeled
surface on top. However, I noticed that my log does not report test-3 as
being in video RAM, yet I still get full framerate. Very confusing. So, for
testing purposes, I changed the code to always use:

m_pSurface = SDL_DisplayFormat( pTempSurf );

This results in test-3 being opaque while retaining high framerate, but it
still will not become an SDL_HWSURFACE, even though my log reports its masks
and BPP to be identical to those of the screen.

Next test was to change the above line into:

m_pSurface = SDL_DisplayFormatAlpha( pTempSurf );

The alpha channel is still there, but the framerate drops to a choppy 25 and
neither of the two surfaces is in video RAM anymore. I assume this is because
both now report their Amask to be FF000000.

Now, what I would like to know is: If I have a 32 bit screen mode, how come
SDL_DisplayFormatAlpha() does not create hardware surfaces from 32-bit
images? Is it even possible to put a surface with an alpha channel into
video RAM?

Thank you for bearing with me!

Cheers!

/Fredrik


At this point I am just guessing, so take it for what it is worth.

It looks to me that even though you asked for a 32 bit video mode, you
did not get a video mode with an alpha channel (the alpha mask is 0). So,
when you ask for a surface format that is incompatible with that mode (an
alpha mask of FF000000, which is not equal to 0), SDL is giving you an emulated 32
bit RGBA surface.
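
One quick way to check that guess (just a sketch; call it on the screen
surface and on whatever SDL_DisplayFormatAlpha() returned):

#include <stdio.h>
#include "SDL.h"

/* Print a surface's alpha mask and whether it is in video RAM.
 * If the screen reports Amask == 0 while the converted image needs
 * FF000000, the converted surface will have been emulated in software. */
static void report_surface( const char *name, SDL_Surface *s )
{
    printf( "%-10s Amask=%08X  HW=%d\n", name,
            s->format->Amask,
            ( s->flags & SDL_HWSURFACE ) != 0 );
}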

This is one where you either need an answer from Sam or you need to go
read the source code.

	Bob Pendleton


Fredrik wrote:

Now, what I would like to know is: If I have a 32 bit screen mode, how come
SDL_DisplayFormatAlpha() does not create hardware surfaces from 32-bit
images? Is it even possible to put a surface with an alpha channel into
video RAM?

Nope, none of the native 2D APIs support alpha blending in hardware.
If you’re doing lots of alpha blending it may actually be faster to either
use a 3D API like OpenGL or do the blending to a software surface and then
blit the entire thing to the screen (which is automatically done if you
don’t specify SDL_HWSURFACE when setting a video mode)
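
A minimal sketch of that second approach (assuming SDL_image is available
for the PNG loading; the mode, file name and delay are only placeholders):

#include "SDL.h"
#include "SDL_image.h"  /* IMG_Load(), from the SDL_image library */

int main( int argc, char *argv[] )
{
    SDL_Surface *screen;
    SDL_Surface *tmp;
    SDL_Surface *sprite;
    SDL_Rect dst;

    SDL_Init( SDL_INIT_VIDEO );

    /* No SDL_HWSURFACE: SDL gives us a plain software framebuffer. */
    screen = SDL_SetVideoMode( 640, 480, 32, SDL_SWSURFACE );

    tmp    = IMG_Load( "test-3.png" );          /* any RGBA image */
    sprite = SDL_DisplayFormatAlpha( tmp );
    SDL_FreeSurface( tmp );

    dst.x = 100;
    dst.y = 100;

    /* The alpha blend happens entirely in system RAM... */
    SDL_BlitSurface( sprite, NULL, screen, &dst );

    /* ...and the finished frame is pushed to the display in one go. */
    SDL_Flip( screen );
    SDL_Delay( 2000 );

    SDL_FreeSurface( sprite );
    SDL_Quit();
    return 0;
}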

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment

Fredrik wrote:

Now, what I would like to know is: If I have a 32 bit screen mode, how
come SDL_DisplayFormatAlpha() does not create hardware surfaces from
32-bit images? Is it even possible to put a surface with an alpha
channel into video RAM?

As far as I remember, SDL is not (yet?) able to do a hardware blit with
alpha on ANY target. So if you use alpha, the better choice is to keep
EVERY surface in system RAM and then blit the final result (or only the
dirty rects) to video RAM; otherwise you’ll take a big performance loss,
because doing alpha on a HW surface means reading back from video RAM, which
is a VERY slow operation.
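
A sketch of the dirty-rects variant (in a real program the rectangle list
would of course come from whatever your sprite code actually redrew this
frame):

#include "SDL.h"

/* 'screen' is assumed to be a software surface from
 * SDL_SetVideoMode( w, h, bpp, SDL_SWSURFACE ). */
static void present_dirty( SDL_Surface *screen )
{
    SDL_Rect dirty[2];

    /* Pretend these two areas were redrawn this frame. */
    dirty[0].x = 32;   dirty[0].y = 48;
    dirty[0].w = 64;   dirty[0].h = 64;

    dirty[1].x = 200;  dirty[1].y = 120;
    dirty[1].w = 64;   dirty[1].h = 64;

    /* Copy only these areas from system RAM to the display,
     * instead of pushing the whole frame. */
    SDL_UpdateRects( screen, 2, dirty );
}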

Bye,
Gabry

At 22:34 2003-03-05 -0800, you wrote:

Nope, none of the native 2D API’s support alpha blending in hardware.
If you’re doing lots of alpha blending it may actually be faster to either
use a 3D API like OpenGL or do the blending to a software surface and then
blit the entire thing to the screen (which is automatically done if you
don’t specify SDL_HWSURFACE when setting a video mode)

Thank you and thanks to everyone that answered!

Looks like I’ll be going the OpenGL route just to be on the safe (and fast)
side ;)

Cheers!

/Fredrik

Sam Lantinga wrote:

Nope, none of the native 2D API’s support alpha blending in hardware.

I thought some of the more recent versions of DirectDraw actually worked
through Direct3D, and thus did support hardware alpha blits, because it
draws the surfaces by rendering quads. I have asked others if I am right
and I have gotten many different answers.

It’s a shame that there isn’t more hardware support for 2D graphics. The
fraction of graphics cards that can even support alpha blits, rotation,
and scaling without just using 3D quads is so small it’s frightening. At
least from what I have seen.

Calvin Spealman wrote:

Sam Lantinga wrote:

Nope, none of the native 2D API’s support alpha blending in hardware.

I thought some of the more recent versions of DirectDraw actually
worked through Direct3D, and thus did support hardware alpha blits,
because it draws the surfaces by rendering quads. I have asked others
if I am right and I have gotten many different answers.

There isn’t any “DirectDraw via Direct3D” functionality; rather,
Microsoft attempted to make 2D programmers migrate to Direct3D by
calling it DirectGraphics and pretending it included 2D functionality ;)
DirectDraw was left at version 7 as far as I know. You can get something
that vaguely resembles a sane 2D API by using the D3DX utility library
with DirectGraphics, from what I can remember. But this is all in DX8
and DX9, which SDL is not aimed at right now.

--
Kylotan
http://pages.eidosnet.co.uk/kylotan

[…]

It’s a shame that there isnt more hardware support for 2D graphics.
The fraction of graphics cards that even can support alpha blits,
rotation, and scaling without just using 3D quads, is so small its
frightening. At least from what I have seen.

Well, maybe - but OTOH, do you really need a dedicated 2D API with
its own drivers? Why, and why avoid having various transformations
and stuff, just because we’re doing 2D? I think the strict separation
between 2D and 3D rendering is a thing of the past, as it no longer
has strong technical motivations.

IMNSHO, development time is better spent improving the 3D acceleration
and making the 3D drivers more complete and compliant. Then we can
just use 2D rendering layers (glSDL style) on top of that. I don’t
see any real problems with that, except possibly that some consumer
3D cards still seem to lack some features that one usually expects
from 2D APIs. (This is pretty much stuff that only matters to GUI
toolkits and things like that, though. Your average action game can
and usually will just repaint the whole screen every frame anyway.)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
http://olofson.net
http://www.reologica.se

David Olofson wrote:

Well, maybe - but OTOH, do you really need a dedicated 2D API with
it’s own drivers? Why, and why avoid having various transformations
and stuff, just because we’re doing 2D? I think the strict separation
between 2D and 3D rendering is a thing of the past, as it no longer
has strong technical motivations.

IMNSHO, development time is better spent improving the 3D acceleration
and making the 3D drivers more complete and compliant. Then we can
just use 2D rendering layers (glSDL style) on top of that. I don’t
see any real problems with that, except possibly that some consumer
3D cards still seem to lack some features that one usually expects
from 2D APIs. (This is pretty much stuff that only matters to GUI
toolkits and things like that, though. Your average action game can
and usually will just repaint the whole screen every frame anyway.)

Nice little plug for your precious glSDL, eh? hehe, I joke.

I partially agree, but I just don’t like it when people seem to forget about
2D graphics. And so, in retaliation, I say things like I did in hopes of
rallying the troops to save enthusiasm for 2D graphics. I think that
explained things…

But you are correct on the matters of 2D over 3D, which is ironic since it’s
displayed in 2D in the end anyway. But, does this mean completely
forgetting the true 2D surfaces we use in light of quads? The issue also
lies in the realm of KISS (keep it simple, stupid). Are we needlessly
complicating things? I believe so. However, I can’t be sure which way is
the more complicated setup.

There is another issue I’ll bring up just because it popped into my
head… There are real reasons for keeping the true 2D structures we use
today, such as non-game uses. Even most 2D games need 3D acceleration
for rotating, scaling, alpha, etc. Now, I do believe those things should
be supported in 2D, but eh, what to do? And of course, we should also all
realize that OpenGL is not a 3D-only API. It was designed with 2D
graphics in mind and that is something we should never forget. One of
the big differences is coordinate systems; going full 3D hardware
actually omits caring about screen size. You just draw relatively. That
is a plus in many ways. I hope someday to see systems where you never
need to care about resolution.
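
For what it’s worth, a sketch of what “drawing relatively” can look like in
OpenGL: set up a fixed virtual coordinate system once, and the drawing code
never has to know the real window resolution (the 640x480 virtual size is
just an example):

#include "SDL_opengl.h"

static void setup_2d_view( int window_w, int window_h )
{
    /* Map a fixed 640x480 "virtual" coordinate system onto whatever
     * resolution the window really has. */
    glViewport( 0, 0, window_w, window_h );

    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glOrtho( 0.0, 640.0, 480.0, 0.0, -1.0, 1.0 );  /* y grows downwards */

    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
}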

I’m ranting. I’ll stop.

[…]

Nice little plug for your precious glSDL, eh? hehe, i joke.

Well, glSDL is a result of a need and my position, rather than the
other way around. :)

I partially agree, but I just dont like when people seem to forget
about 2D graphics. And so, in retaliation, I say things like I did
in hopes of rallying the troops to save enthusiasm for 2D graphics.
I think that explained things…

I agree that forgetting 2D is a bad thing, but then I tend to think of
it in terms of “2D projection”. The underlying technique has been
changing for as long as I’ve been playing around with computers (some
18 years), and it’s still changing.

It started with very primitive “enhanced text modes” and the like.
Then came the real hardware sprites, to make up for the fact that
CPUs were way too slow to animate anything on the pixel level.
Eventually, some platforms started using specially designed hardware
blitters in combination with, or instead of, sprites. Eventually, CPUs
became so fast that they made both solutions obsolete through
performance and flexibility. Then real time 3D graphics boomed, and
with it came blending effects, image scaling, antialiasing and
texturing - and CPUs were again too slow. The age of 3D accelerators
began.

This is where we are now, and the way things look, I think it will be
quite some time before we’re back to some form of s/w rendering.

(The closest to that available to “normal” users is the Wildcat VP,
which is based on arrays of very simple “CPU” cores in a network.
This isn’t exactly your average processor, but it is pretty much
fully programmable.)

Anyway, whether you’re doing 2D or 3D games is secondary to what
hardware you’re using. There were various kinds of 3D games based on
vector displays (before the array based displays we use these days),
(pre)scaled sprites, s/w or h/w rendered wireframe into array based
displays and whatnot. We’re still using textured polygons, which are
just another approximation, strongly rooted in traditional 2D
rendering techniques.

But you are correct the matters of 2D over 3D, which is ironic
since its displayed in 2D in the end anyway.

Exactly. It’s only the API and the way some of the hardware is used
that makes it 3D oriented in any way. OpenGL just happens to have 4x4
matrices and 4D coordinates, because that works nicely with anything
up to 3D. Then it (and GLU) has some rather 3D specific features, but
that doesn’t make OpenGL less suitable for 2D in any way.

But, does this mean
completely forgetting the true 2D surfaces we use in light of
Quads?

Well, what do you want with it? The scaling and filtering is built
into the hardware, so it doesn’t impact performance if you use it.
That doesn’t mean you have to use it.

The issue also lies in the realm of KISS (keep it simple,
stupid.) Are we needlessly complicating things? I believe so.
However, I can’t be sure which way is the more complicated set up.

2D gets pretty complicated as well, if you want to do anything
interesting at all. Big deal if there are Z coordinates and other
features you never use… Lots of people want 3D, so there’s a big
industry around it. If we only want to use part of what the resulting
drivers and hardware can do, fine. It’s not our problem that the
subsystems are overkill for what we want to do. (If they are, that
is. I don’t think they will be in future “2D” games, whether their
projections are fully 2D or not.)

There is another issue I’ll bring up just because it popped into my
head… There are real reasons for keeping the true 2D structures
we use today, such as non game uses.

Really? Where do you draw the line between which transformations are
legal in a 2D API, and which ones are not?

Even most 2D games need 3D
acceleration for rotating, scaling, alpha, etc. Now, I do believe
those things should be supported in 2D, but eh what to do?

Use OpenGL or Direct3D. :)

If you design a 2D API that does everything advanced 2D apps need,
it’s going to be very similar to a 3D API. I mean, the step from
texturing 2D quads (where you can move the vertices around as you
like) to 3D is very small. On the rendering level, it’s just a matter
of perspective-correct texture mapping… The rest is just matrix
operations, and those are pretty generic things both on video cards
and in CPUs (SIMD) these days.
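
For example, a glSDL-style “blit” is really just a textured, screen-aligned
quad (a sketch; the texture handle and coordinates are assumed to come from
your own sprite code):

#include "SDL_opengl.h"

static void blit_quad( GLuint tex, float x, float y, float w, float h )
{
    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, tex );

    /* Per-pixel alpha comes essentially for free here. */
    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

    glBegin( GL_QUADS );
    glTexCoord2f( 0.0f, 0.0f ); glVertex2f( x,     y     );
    glTexCoord2f( 1.0f, 0.0f ); glVertex2f( x + w, y     );
    glTexCoord2f( 1.0f, 1.0f ); glVertex2f( x + w, y + h );
    glTexCoord2f( 0.0f, 1.0f ); glVertex2f( x,     y + h );
    glEnd();
}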

And of
course, we should also all realize that OpenGL is not a 3D only
API. It was designed with 2D graphics in mind and that is something
we should never forget. One of the big differences is coordinate
systems, going full 3D hardware actually ommits caring about screen
size. you just draw relatively. That is a plus in many ways.

Yeah, this is one of the big advantages, but it also has some
issues…

I hope
someday to see systems where you never need to care about
resolution.

I don’t think that’s possible, except maybe if every display has a
few times higher resolution than any human eye, or if we ditch fixed
pixel clocks and horizontal scan lines for some other system.

Scaling means interpolation, and that means you get to face Nyquist,
aliasing and other stuff, just like in audio - only here, we get it
all in (at least) two dimensions. Any rendition of a vector "scene"
is an approximation. If you scale bitmap graphics, the result will
always be inaccurate.

Even in 2048x1536, it matters how antialiasing and scaling are done,
and that’s about the highest resolution this $$,$$$ monitor can
handle. Not even the fastest accelerators around are quite fast
enough to get good frame rates with any recent game in that
resolution, so it’s not just about monitors. I think it will be quite
a few years until display resolution is something you just set and
forget.

For 18 years, I’ve heard over and over that “Optimization is a waste
of time! Next year, computers will be fast enough.”

Damn long year, if you ask me…

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
http://olofson.net
http://www.reologica.se