[OT] 64bit -> 32bit OpenGL

Hello, a quick question.
I have read that a 64-bit CPU cannot run 32-bit OpenGL applications. Is that
true?


???
Of course NOT!!!

Very nice. I’m not sure where I read that, but it got me very confused.

// Alexander Bussman



On 64-bit CPUs, C/C++ “int” is 64-bit, so you must typedef (or declare)
GLuint, GLint, etc. to 32 bits (C/C++ long) inside <gl/gl.h>.

This is not always so. For example, the 64-bit CPUs being released by AMD
and Intel still use 32-bit integers. Pointers (memory addresses) are now 64
bits. This can lead to problems when code assumes an address is the same
size as an int, but for the most part everything works quite well.
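To make that failure mode concrete, here is a minimal C sketch (mine, not
from the thread) of the pointer-stuffed-into-an-int bug; intptr_t comes
from C99’s <stdint.h> and is defined to be wide enough to hold a pointer:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int value = 42;
        int *p = &value;

        /* BAD on LP64: sizeof(int) == 4 but sizeof(int *) == 8, so
         * casting the pointer to int would chop off the upper 32 bits:
         *     int addr = (int)p;            -- truncates!
         */

        /* OK: intptr_t can round-trip a pointer on 32-bit and 64-bit
         * platforms alike. */
        intptr_t addr = (intptr_t)p;
        printf("round-tripped pointer: %p\n", (void *)(int *)addr);
        return 0;
    }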

The OpenGL header files on 64-bit machines already declare GLint properly,
so you need not worry about it. The remaining worry would be that code
compiled on a 32-bit machine has pointers or integers of the wrong size.
64-bit machines run legacy 32-bit applications in a different mode; in that
mode, access to 64-bit features is not available (but also not needed,
because it is a 32-bit application). I know this to be true on Sparc and
Opteron.
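A quick way to see the two modes side by side (my own sketch, not from the
thread): build this trivial program twice on an x86-64 Linux box, once
plain and once with gcc’s -m32 switch (assuming the 32-bit runtime
libraries are installed), and it reports the mode the process runs in:

    #include <stdio.h>

    int main(void)
    {
        /* sizeof(void *) reveals the mode the *process* runs in,
         * independent of what the CPU underneath is capable of. */
        printf("this process is %u-bit\n",
               (unsigned)(sizeof(void *) * 8));
        return 0;
    }

The plain build prints 64; the -m32 build prints 32, even though both run
on the same 64-bit CPU.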

James Best



On 64-bit CPUs, C/C++ “int” is 64-bit, so you must typedef (or declare)
GLuint, GLint, etc. to 32 bits (C/C++ long) inside <gl/gl.h>.

I believe MIPS64 has this misfeature, but for all other 64-bit Linux
architectures, ILP (integer/long/pointer) = 4/8/8 (int is 32-bit and long
is 64-bit). I’ve also heard that Win64 has a 32-bit int and long,
which seems to be their mistake.

long long should be 64-bit everywhere, even on 32-bit architectures.

There are always exceptions to these rules (like MIPS64), but you have
to ask yourself how general you really need to be.
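If you want to check which model your own toolchain uses, a few lines of C
settle it (a sketch of mine, not from the thread):

    #include <stdio.h>

    int main(void)
    {
        /* gcc on amd64 (LP64) prints: int=4 long=8 ptr=8 llong=8
         * Win64 (LLP64) would print:  int=4 long=4 ptr=8 llong=8 */
        printf("int=%u long=%u ptr=%u llong=%u\n",
               (unsigned)sizeof(int), (unsigned)sizeof(long),
               (unsigned)sizeof(void *), (unsigned)sizeof(long long));
        return 0;
    }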

-Hollis

On Mar 11, 2004, at 5:39 AM, <- Chameleon -> wrote:

This is not always so. For example, the 64-bit CPUs being released by AMD and Intel still use 32-bit integers.

I don’t use gcc, but I think it uses a 64-bit “int” type on 64-bit machines.

Pointers (memory addresses) are now 64 bits. This can lead to problems when code assumes an address is the same size as an int, but for the most part everything works quite well.

Graphics card vendors must recompile opengl.lib / opengl.dll / opengl.so for the new address size.

So, OpenGL is alive!

Actually, at the moment, it is true, at least on Linux.

There aren’t any 3D drivers that run in 64-bit mode that can run 32-bit
GL apps, but I’ve been told that this situation is being remedied.

It’s mostly a current issue and not a design flaw.

–ryan.

On Thu, 2004-03-11 at 06:55, Alexander Bussman wrote:

On Thursday 11 March 2004 12.36, <- Chameleon -> wrote:

Hello, a quick question.
I have read that a 64-bit CPU cannot run 32-bit OpenGL applications. Is
that true?

???
Of course NOT!!!

Very nice. I’m not sure where I read that, but it got me very confused.

I don’t use gcc, but I think it uses a 64-bit “int” type on 64-bit
machines.

gcc on amd64 uses 32 bits for “int” and 64 bits for “long”. Visual C++
for amd64 uses 32 bits for both “int” and “long”, for legacy code
support.
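If code really must name an integer type that can hold a pointer on both
families, the usual mid-2000s workaround keys off predefined compiler
macros. This is my own hedged sketch; my_uintptr is a made-up name, and
C99’s uintptr_t is the cleaner answer where it is available:

    /* _WIN64 is predefined by Microsoft's 64-bit compilers; __LP64__ by
     * gcc on LP64 targets such as amd64 Linux. */
    #if defined(_WIN64)
    typedef unsigned __int64 my_uintptr;  /* "long" is only 32 bits here */
    #else
    typedef unsigned long    my_uintptr;  /* LP64 Unix, and all 32-bit */
    #endif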

Please don’t post misinformation; if you have to preface it with “I
think”, then double-check your facts first, or you could cause someone
big headaches even while trying to be helpful.

–ryan.

I believe MIPS64 has this misfeature, but for all other 64-bit Linux
architectures, ILP (integer/long/pointer) = 4/8/8 (int is 32-bit and long
is 64-bit). I’ve also heard that Win64 has a 32-bit int and long,
which seems to be their mistake.

These aren’t misfeatures or mistakes. The C spec says that
sizeof(short) <= sizeof(int) <= sizeof(long), but doesn’t specify explicit
sizes for them.

Counting on the size of an intrinsic C/C++ type is a nonportable mistake
that is easily avoided if you know to do so.

There are always exceptions to these rules (like MIPS64), but you have
to ask yourself how general you really need to be.

You should use abstractions. In SDL, count on Uint32 to be 32 bits, or
use the newer ISO standard types.
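For example (my sketch, assuming SDL 1.2’s headers and a C99 compiler), a
record that must have the same layout on every platform can be written
with either family of types, and the assumption can even be checked at
compile time with the old negative-array-size trick:

    #include "SDL.h"       /* Uint8/Uint16/Uint32/Uint64 */
    #include <stdint.h>    /* C99: uint8_t/uint16_t/uint32_t/uint64_t */

    /* ChunkHeader is a made-up example of a record that must look
     * identical everywhere. */
    typedef struct {
        Uint32   magic;    /* SDL's fixed-width type */
        uint32_t length;   /* the ISO C99 equivalent */
    } ChunkHeader;

    /* Compile-time check: the array size goes negative (an error) if
     * the assumption is ever wrong on some platform. */
    typedef int assert_uint32_is_4_bytes[(sizeof(Uint32) == 4) ? 1 : -1];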

–ryan.

Thanks for telling me; it was probably on a Linux forum somewhere that I
read it. Now I know why it is true, and it’s nice to read that it will get
fixed too.

On Monday 15 March 2004 01.45, Ryan C. Gordon wrote:


Actually, at the moment, it is true, at least on Linux.

There aren’t any 3D drivers that run in 64-bit mode that can run 32-bit
GL apps, but I’ve been told that this situation is being remedied.

It’s mostly a current issue and not a design flaw.

–ryan.