SDL_GetVideoInfo( )

Hi

I’m using SDL with OpenGL, and I’m trying to view hardware info through
SDL_GetVideoInfo().

const SDL_VideoInfo* info = NULL;

SDL_Init(SDL_INIT_VIDEO);

info = SDL_GetVideoInfo();
cout << info->hw_available << endl;
cout << info->wm_available << endl;
cout << info->blit_hw << endl;
cout << info->blit_hw_CC << endl;
cout << info->blit_hw_A << endl;
cout << info->blit_sw << endl;
cout << info->blit_sw_CC << endl;
cout << info->blit_sw_A << endl;
cout << info->blit_fill << endl;
cout << info->video_mem << endl;
cout << info->vfmt->BitsPerPixel << endl;

Something like this. The output is:
0
1
0
0
0
0
0
0
0
0
(there should be a BPP value here, but there isn’t one)

So I have no idea what is wrong.
I’m using an ATI Radeon 9M, on Debian Linux, XFree86 4.3,
with direct rendering on
(glxinfo: direct rendering: Yes),
using the native Linux 2.6.3 driver, not the ATI FireGL driver.
I looked for a searchable SDL mailing list archive but found none. Is
there one?

Thanks, have a nice day
Jan
--
registered user no. 207163 using Debian
icq: 78209478
home: linuxdesktop.kn.vutbr.cz

Jan Bernatik wrote:

Hi

I’m using SDL with OpenGL, and I’m trying to view hardware info through
SDL_GetVideoInfo().

const SDL_VideoInfo* info = NULL;

SDL_Init(SDL_INIT_VIDEO);

info = SDL_GetVideoInfo();
cout << info->hw_available << endl;
cout << info->wm_available << endl;
cout << info->blit_hw << endl;
cout << info->blit_hw_CC << endl;
cout << info->blit_hw_A << endl;
cout << info->blit_sw << endl;
cout << info->blit_sw_CC << endl;
cout << info->blit_sw_A << endl;
cout << info->blit_fill << endl;
cout << info->video_mem << endl;
cout << info->vfmt->BitsPerPixel << endl;

Something like this. The output is:
0
1
0
0
0
0
0
0
0
0
(there should be a BPP value here, but there isn’t one)

Before you get a bpp value, you need to set the video mode.
Also, most blits/capabilities aren’t accelerated under X11 anyway, so you
won’t get much acceleration (unless you use the upcoming “we hope it’s
soon ready” glSDL backend).
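
For illustration, a minimal sketch of that ordering against the SDL 1.2 API might look like the following; the 640x480, 16 bpp mode and the SDL_OPENGL flag are only placeholders, and the cast on BitsPerPixel anticipates a point made later in the thread:

#include <iostream>
#include "SDL.h"
using namespace std;

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        cerr << "SDL_Init failed: " << SDL_GetError() << endl;
        return 1;
    }

    // Set the video mode first, then query the driver info.
    if (SDL_SetVideoMode(640, 480, 16, SDL_OPENGL) == NULL) {
        cerr << "SDL_SetVideoMode failed: " << SDL_GetError() << endl;
        SDL_Quit();
        return 1;
    }

    const SDL_VideoInfo *info = SDL_GetVideoInfo();
    // BitsPerPixel is a Uint8, so cast it to print it as a number.
    cout << "bpp: " << static_cast<unsigned int>(info->vfmt->BitsPerPixel) << endl;

    SDL_Quit();
    return 0;
}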

So I have no idea what is wrong.
I’m using an ATI Radeon 9M, on Debian Linux, XFree86 4.3,
with direct rendering on
(glxinfo: direct rendering: Yes),
using the native Linux 2.6.3 driver, not the ATI FireGL driver.

OpenGL acceleration capabilities have nothing to do with the flags you are
displaying. Those flags reflect 2D acceleration capabilities.

I looked for a searchable SDL mailing list archive but found none. Is
there one?

Through Gmane, maybe:
http://news.gmane.org/gmane.comp.lib.sdl

Stephane

Before you get a bpp value, you need to set the video mode.
Also, most blits/capabilities aren’t accelerated under X11 anyway, so you
won’t get much acceleration (unless you use the upcoming “we hope it’s
soon ready” glSDL backend).

Actually, I set the video mode before SDL_GetVideoInfo() with:

flags = SDL_OPENGL;
SDL_SetVideoMode(X_WindowSize, Y_WindowSize, 16, flags);

but I still don’t get a bpp value, which seems strange to me.

Jan
--
registered user no. 207163 using Debian
icq: 78209478
home: linuxdesktop.kn.vutbr.cz

Jan Bernatik wrote:

Actually, I set the video mode before SDL_GetVideoInfo() with:

flags = SDL_OPENGL;
SDL_SetVideoMode(X_WindowSize, Y_WindowSize, 16, flags);

but I still don’t get a bpp value, which seems strange to me.

Jan

I can reproduce this bug, but this is definitely not an SDL problem,
since the following non-SDL program has the same erratic behaviour :

#include <iostream>
using namespace std;

typedef unsigned char Uint8;

int main()
{
    Uint8 t = 3;
    cout << t << endl;
    return 0;
}

It seems Uint8 can’t be printed with iostreams ?
(I’m using gcc version 3.3.1 (Mandrake Linux 9.2 3.3.1-2mdk))

There’s probably a bug somewhere else… What compiler version/system
are you using ?

Stephane

Uint8 is a character type, and is therefore output as a character. If
you want it to be displayed as a number, cast it to unsigned int first.
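
For example, a tiny sketch of that cast; the variable t simply mirrors the test program quoted above:

#include <iostream>
using namespace std;

typedef unsigned char Uint8;

int main()
{
    Uint8 t = 3;
    cout << t << endl;                            // printed as a (control) character
    cout << static_cast<unsigned int>(t) << endl; // printed as the number 3
    return 0;
}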

-Willem Jan

On Wed, 2004-03-17 at 23:30, Stephane Marchesin wrote:

It seems Uint8 can’t be printed with iostreams ?
(I’m using gcc version 3.3.1 (Mandrake Linux 9.2 3.3.1-2mdk))

I can reproduce this bug, but this is definitely not an SDL problem,
since the following non-SDL program has the same erratic behaviour :

#include <iostream>
using namespace std;

typedef unsigned char Uint8;

int main()
{
    Uint8 t = 3;
    cout << t << endl;
    return 0;
}

It seems Uint8 can’t be printed with iostreams ?
(I’m using gcc version 3.3.1 (Mandrake Linux 9.2 3.3.1-2mdk))

There’s probably a bug somewhere else… What compiler version/system
are you using ?

Hmm, that’s true. Why haven’t I tried it myself?

g++ version 2.95.4 on stable Debian. I will investigate some more, perhaps in
g++ forums. Thanks a lot for your time.

J.

Jan Bernatik wrote:

Hmm, that’s true. Why haven’t I tried it myself?

g++ version 2.95.4 on stable Debian. I will investigate some more, perhaps in
g++ forums. Thanks a lot for your time.

J.

Well, Willem Jan Palenstijn has a nice answer for this, you could
"investigate" it :)
(I think we both got misled by the name Uint8 being given to something that’s
really a char.)

Stephane

That type is in fact wrong, because char can be 16 bits, although it
mostly is 8 bits. So it should be an unsigned byte, not a char.

So if SDL defines Uint8 as unsigned char, then it is wrong, because in
some compiler environments char might be equal to wchar, which is 16
bits.

On Thursday 18 March 2004 16:20, Jan Bernatik wrote:

I can reproduce this bug, but this is definitely not an SDL
problem, since the following non-SDL program has the same erratic
behaviour :

#include <iostream>
using namespace std;

typedef unsigned char Uint8;

int main()
{
    Uint8 t = 3;
    cout << t << endl;
    return 0;
}

It seems Uint8 can’t be printed with iostreams ?
(I’m using gcc version 3.3.1 (Mandrake Linux 9.2 3.3.1-2mdk))

There’s probably a bug somewhere else… What compiler
version/system are you using ?

Hmm, that’s true. Why haven’t I tried it myself?

g++ version 2.95.4 on stable Debian. I will investigate some more, perhaps in
g++ forums. Thanks a lot for your time.

C has no “byte” type. Since SDL is meant for both C and C++, how else should
SDL define it?

-Sean Ridenour

On Monday 22 March 2004 04:37 pm, Sami Näätänen wrote:

On Thursday 18 March 2004 16:20, Jan Bernatik wrote:

I can reproduce this bug, but this is definitely not an SDL
problem, since the following non-SDL program has the same erratic
behaviour :

#include <iostream>
using namespace std;

typedef unsigned char Uint8;

int main()
{
    Uint8 t = 3;
    cout << t << endl;
    return 0;
}

It seems Uint8 can’t be printed with iostreams ?
(I’m using gcc version 3.3.1 (Mandrake Linux 9.2 3.3.1-2mdk))

There’s probably a bug somewhere else… What compiler
version/system are you using ?

Hmm, that’s true. Why haven’t I tried it myself?

g++ version 2.95.4 on stable Debian. I will investigate some more, perhaps in
g++ forums. Thanks a lot for your time.

That type is in fact wrong, because char can be 16 bits, although it
mostly is 8 bits. So it should be an unsigned byte, not a char.

So if SDL defines Uint8 as unsigned char, then it is wrong, because in
some compiler environments char might be equal to wchar, which is 16
bits.

[…]

This is getting quite a bit off-topic, but…

That type is in fact wrong, because char can be 16 bits, although it
mostly is 8 bits. So it should be an unsigned byte, not a char.

While technically this is true and the standard allows the “char” type
to be bigger than 8 bits, it’s very rare for a compiler to do that (in
fact I haven’t seen any doing that so far; I’d be very interested to
learn if you have any specifically in mind). Instead, the wchar_t type
should be used to represent multibyte (mb) characters. See also the ISO
specification and
http://www.unix-systems.org/version2/whatsnew/login_mse.html

So if SDL defines Uint8 as unsigned char, then it is wrong, because in
some compiler environments char might be equal to wchar, which is 16
bits.

C has no “byte” type. Since SDL is meant for both C and C++, how else
should SDL define it?

In ISO C, you can use stdint.h, which provides types with guaranteed bit
sizes (as well as types with a certain minimal bit size, etc.). See also
http://www.opengroup.org/onlinepubs/007904975/basedefs/stdint.h.html

Sadly, not all C compilers provide this header. However, if you have a
"configure" script, you can just use those types if stdint.h is
present, and otherwise fall back to the classic definitions of the
uint8/sint16/etc. types. Or you can directly check in your configure
script for the size of various types.
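
A minimal sketch of that fallback idea; the HAVE_STDINT_H macro here stands in for whatever your configure script actually defines, and the fallback typedefs assume the usual type sizes on desktop platforms:

#ifdef HAVE_STDINT_H
#include <stdint.h>

typedef uint8_t  Uint8;
typedef int16_t  Sint16;
typedef uint32_t Uint32;
#else
/* Classic fallback: assumes char is 8 bits, short is 16 bits and int is
   32 bits, which a configure-time size check should verify. */
typedef unsigned char  Uint8;
typedef signed short   Sint16;
typedef unsigned int   Uint32;
#endif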

Cheers,

Max

On 24.03.2004 at 05:34, Sean Ridenour wrote:

On Monday 22 March 2004 04:37 pm, Sami Näätänen wrote:

That type is in fact wrong, because char can be 16 bits, although it
mostly is 8 bits. So it should be an unsigned byte, not a char.

While technically this is true and the standard allows the “char” type
to be bigger than 8 bits, it’s very rare for a compiler to do that (in
fact I haven’t seen any doing that so far; I’d be very interested to
learn if you have any specifically in mind). Instead, the wchar_t type
should be used to represent multibyte (mb) characters. See also the ISO
specification and
http://www.unix-systems.org/version2/whatsnew/login_mse.html

if this were true, would it also be true that sizeof(char)==2? In which
case all hell is going to break loose…

if this were true, would it also be true that sizeof(char)==2? In which
case all hell is going to break loose…

No. In the standard, sizeof() is said to return sizes in multiples of
sizeof(char). So, by definition, sizeof(char) is always 1. If the number
of bits changes, that’s visible in the constant CHAR_BIT.

And SDL doesn’t have to use the same headers on every platform. The SDL
types [SU]int{8,16,32} are mapped to whatever types have the correct size on
the platform and compiler, and that’s even checked in the header, see
SDL_types.h. It would of course be difficult to port SDL to a platform
where CHAR_BIT is 16, but if there were a special compiler type for an
8-bit type, that would be used for [SU]int8.

--
Petri Latvala
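
A sketch of the kind of compile-time size check meant here; it shows the general trick rather than the literal contents of SDL_types.h:

/* The typedef'd array gets a negative size, and therefore a compile
   error, whenever the condition is false. */
#define COMPILE_TIME_ASSERT(name, cond) \
    typedef int compile_time_assert_##name[(cond) ? 1 : -1]

typedef unsigned char  Uint8;
typedef unsigned short Uint16;
typedef unsigned int   Uint32;

COMPILE_TIME_ASSERT(uint8_size,  sizeof(Uint8)  == 1);
COMPILE_TIME_ASSERT(uint16_size, sizeof(Uint16) == 2);
COMPILE_TIME_ASSERT(uint32_size, sizeof(Uint32) == 4);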

if this were true, would it also be true that sizeof(char)==2? In
which case all hell is going to break loose…

AFAIK, by definition sizeof(char)==1.

On 24/03/2004, David Morse, you wrote:



AFAIK, by definition sizeof(char)==1.

Nope. The standards say “at least 8 bits”.

Ing. Gabriel Gambetta
ARTech - GeneXus Development Team
ggambett at artech.com.uy

Quoting Olivier Fabre:

On 24/03/2004, David Morse, you wrote:

if this were true, would it also be true that sizeof(char)==2? In
which case all hell is going to break loose…

AFAIK, by definition sizeof(char)==1.

Bjarne Stroustrup mentions in his C++ book that sizeof(char) is guaranteed to be
1 in C++ on any architecture - I presume the same must be true for C.
Everything else can be any size; the only guarantees are that sizeof(char) == 1
and the ordering of sizes (along the lines of sizeof(char) <= sizeof(short)
<= sizeof(long)).
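
A small sketch that prints those guarantees for whatever platform it is compiled on:

#include <climits>
#include <iostream>
using namespace std;

int main()
{
    // sizeof counts in units of char, so sizeof(char) is 1 by definition;
    // the number of bits per char is reported by CHAR_BIT from <climits>.
    cout << "CHAR_BIT      = " << CHAR_BIT << endl;
    cout << "sizeof(char)  = " << sizeof(char) << endl;
    cout << "sizeof(short) = " << sizeof(short) << endl;
    cout << "sizeof(int)   = " << sizeof(int) << endl;
    cout << "sizeof(long)  = " << sizeof(long) << endl;
    return 0;
}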

This doesn’t contradict what he said. sizeof does not return bits. (It
would make no sense for sizeof(char) to be anything but 1.)

On Wed, Mar 24, 2004 at 01:50:48PM -0300, Gabriel Gambetta wrote:

AFAIK, by definition sizeof(char)==1.

Nope. The standards say “at least 8 bits”.


Glenn Maynard

Neil Brown wrote:

Quoting Olivier Fabre:

if this were true, would it also be true that sizeof(char)==2? In
which case all hell is going to break loose…

AFAIK, by definition sizeof(char)==1.

Bjarne Stroustrup mentions in his C++ book that sizeof(char) is guaranteed to be
1 in C++ on any architecture - I presume the same must be true for C.
Everything else can be any size; the only guarantees are that sizeof(char) == 1
and the ordering of sizes (along the lines of sizeof(char) <= sizeof(short)
<= sizeof(long)).

OK: supposing sizeof(char) == 1 and CHAR_BIT == 16, and that the
implementation-defined integer type _byte exists and is 8 bits,
mallocing a _byte is going to be tough:

malloc( 1 * sizeof (_byte) ); // sizeof == 0.5 ?!

If you round sizeof (_byte) up to one, then you can’t put _bytes in an
efficient array, lest this fail:

_byte this_bytes[4];
assert( sizeof (this_bytes) == 4 * sizeof (_byte) );

On 24/03/2004, David Morse, you wrote:

Neil Brown wrote:

Quoting Olivier Fabre:

if this were true, would it also be true that sizeof(char)==2? In
which case all hell is going to break loose…

AFAIK, by definition sizeof(char)==1.

Bjarne Stroustrup mentions in his C++ book that sizeof(char) is guaranteed to be
1 in C++ on any architecture - I presume the same must be true for C.
Everything else can be any size; the only guarantees are that sizeof(char) == 1
and the ordering of sizes (along the lines of sizeof(char) <= sizeof(short)
<= sizeof(long)).

OK: supposing sizeof(char) == 1 and CHAR_BIT == 16, and that the
implementation-defined integer type _byte exists and is 8 bits,
mallocing a _byte is going to be tough:

malloc( 1 * sizeof (_byte) ); // sizeof == 0.5 ?!

If you round sizeof (_byte) up to one, then you can’t put _bytes in an
efficient array, lest this fail:

_byte this_bytes[4];
assert( sizeof (this_bytes) == 4 * sizeof (_byte) );

There most likely will not be any possibility to have 8-bit integers in a
system that has CHAR_BIT == 16, or the char would have been defined to
that size. Besides, all the compilers that might have had 16-bit chars
are extinct by now, I think. But this is more a case which shows
how to make holes in the definitions of programming languages.

On Monday 29 March 2004 21:59, David Morse wrote:

On 24/03/2004, David Morse, you wrote:

There most likely will not be any possibility to have 8-bit integers in a
system that has CHAR_BIT == 16, or the char would have been defined to
that size. Besides, all the compilers that might have had 16-bit chars
are extinct by now, I think. But this is more a case which shows
how to make holes in the definitions of programming languages.

This isn’t a case of compilers; this is a case of platforms. On some
computers, past or future, the minimum addressable byte is 16 bits. On
some, it’s 9 bits. If you’re going to have a compiler on those, they
ought to have a 16-bit char and a 9-bit char, respectively.

C’s rules about type sizes are flexible just for this. It was designed
to be a “portable” language for systems programming, and portability
means just that, not casting numbers in stone. (Of course, this approach
requires that your program doesn’t rely on any of those numbers.)

--
Petri Latvala

Petri Latvala wrote:

There most likely will not be any possibility to have 8-bit integers in a
system that has CHAR_BIT == 16, or the char would have been defined to
that size. Besides, all the compilers that might have had 16-bit chars
are extinct by now, I think. But this is more a case which shows
how to make holes in the definitions of programming languages.

This isn’t a case of compilers; this is a case of platforms. On some
computers, past or future, the minimum addressable byte is 16 bits. On
some, it’s 9 bits. If you’re going to have a compiler on those, they
ought to have a 16-bit char and a 9-bit char, respectively.

C’s rules about type sizes are flexible just for this. It was designed
to be a “portable” language for systems programming, and portability
means just that, not casting numbers in stone. (Of course, this approach
requires that your program doesn’t rely on any of those numbers.)


Petri Latvala

I’m having one heck of a time compiling piave on a Mandrake 10.0 system. Although I’ve installed SDL with the Mandrake install, it says it cannot find sdl-config. After doing a find and a locate, it’s just not there. I tried to re-install SDL from the RPM binaries on libsdl.org, but it still isn’t created. Any ideas?

