Hardware surfaces under Linux

Hi,

I use SDL on Linux, with XFree86 4.1 and the latest NVidia drivers.
And I wonder if there’s a way to get a hardware surface with this config.

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp, …
It always gives me a software surface.
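For reference, this is roughly the test I’m running (a minimal sketch against the SDL 1.2 API, error handling trimmed):

    #include <stdio.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        SDL_Surface *screen;

        if (SDL_Init(SDL_INIT_VIDEO) < 0) {
            fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }

        /* Ask for a double-buffered hardware surface in fullscreen. */
        screen = SDL_SetVideoMode(640, 480, 16,
                                  SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
        if (!screen) {
            fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
            SDL_Quit();
            return 1;
        }

        /* SDL may silently fall back to a software surface, so check
           the flags of the surface actually returned. */
        printf("Got a %s surface\n",
               (screen->flags & SDL_HWSURFACE) ? "hardware" : "software");

        SDL_Quit();
        return 0;
    }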

Do I have to use something like glSDL ?

Thank you.
--
Allergy
http://www.alrj.org

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp, …
It always gives me a software surface.

I’m able to get a hardware surface if I enable DGA (export SDL_VIDEODRIVER=dga)

Bye,
Gabry (gabrielegreco at tin.it)

On Wednesday 09 January 2002 17:11, allergy wrote:

Hi,

I use SDL on Linux, with XFree86 4.1 and the latest NVidia drivers.
And I wonder if there’s a way to get a hardware surface with this
config.

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp, …
It always gives me a software surface.

You have to be root and use DGA - but I still can’t say for sure if it’s
possible with that configuration.

Do I have to use something like glSDL ?

Well, first of all, you shouldn’t really use glSDL for anything serious
in its current form. It’s a proof-of-concept hack, and not all that
useful, as it’s “on the wrong side” of SDL. (It’s a source level wrapper,
rather than a run time selectable backend.)

Anyway, glSDL may indeed give you a hardware surface - but not in the
same sense of the word as an SDL hardware screen surface. It just opens
up an OpenGL rendering context, and then translates all SDL blitting
calls into OpenGL calls. Every time you modify a surface, its OpenGL
texture(s) will be updated. When you blit a surface to the screen, glSDL
tells OpenGL to render one or more quads, textured with the data
converted and transferred from your surfaces. Works just fine (and fast!)
that far…
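(Conceptually, every blit ends up as something like this - a hand-written sketch of the idea, not glSDL’s actual code; it assumes texturing is enabled and a 2D orthographic projection is set up:)

    #include <GL/gl.h>

    /* Sketch: draw texture 'tex' as a quad at (x, y), w x h pixels. */
    static void blit_quad(GLuint tex, float x, float y, float w, float h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
        glEnd();
    }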

However, if you try to lock the screen surface for low level software
rendering, you’ll (eventually - it’s not implemented in the current
version) force glSDL to transfer the whole screen into a software buffer
that you can modify - and then it will be sent back into one or more
textures, and rendered as one or more textured quads. That is, it’s even
worse than SDL_OPENGLBLIT, just to be 100% compatible with “normal” SDL.
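(The access pattern that triggers that path is the completely ordinary SDL 1.2 locking idiom - ‘screen’ here being the display surface:)

    /* screen: the display surface from SDL_SetVideoMode(). */
    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);

    /* ... poke pixels directly through screen->pixels ... */

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);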

So, in short, if you’re going to blit mostly static surfaces (tiles,
sprites etc) to the screen, and draw filled rects, glSDL will work great.

If you’re going to do fullscreen, pixel-by-pixel software rendering,
glSDL will probably be slower than any normal SDL setup. Accelerated
graphics just doesn’t mix well with software rendering, and OpenGL isn’t
even designed to make the best of it - and glSDL can’t change that.

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Wednesday 9 January 2002 19:27, you wrote:

Hi,

I use SDL on Linux, with XFree86 4.1 and the latest NVidia drivers.
And I wonder if there’s a way to get a hardware surface with this
config.

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp, …
It always gives me a software surface.

You have to be root and use DGA - but I still can’t say for sure if
it’s possible with that configuration.

I really don’t want to be root.
It’s baaaaaaad :)

Do I have to use something like glSDL ?

Well, first of all, you shouldn’t really use glSDL for anything
serious in its current form. It’s a proof-of-concept hack, and not
all that useful, as it’s “on the wrong side” of SDL. (It’s a source
level wrapper, rather than a run time selectable backend.)

Actually, I wasn’t going to use it; I’d rather code some 2D-over-OpenGL
functions myself, maybe inspired by yours :)

[SNIP]

So, in short, if you’re going to blit mostly static surfaces (tiles,
sprites etc) to the screen, and draw filled rects, glSDL will work
great.

Yes, mostly, if not only…

Needless to say, it’s a pity that a user cannot get a simple HW
surface. Is it XFree- or SDL-related?

Has anyone tried Evas (Enlightenment’s canvas) for rendering + SDL for
events? ;-)


Allergy
http://www.alrj.org

On Wednesday 09 January 2002 20:15, allergy wrote:

On Wednesday 9 January 2002 19:27, you wrote:

Hi,

I use SDL on Linux, with XFree86 4.1 and the latest NVidia drivers.
And I wonder if there’s a way to get a hardware surface with this
config.

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp, …
It always gives me a software surface.

You have to be root and use DGA - but I still can’t say for sure if
it’s possible with that configuration.

I really don’t want to be root.
It’s baaaaaaad :)

Well, you can try DirectX on Windows, if you like. There, you get the
privilege of blowing up the system without even having "PowerUser"
privileges - and sysadmins can’t keep you from doing it…! :D

Do I have to use something like glSDL ?

Well, first of all, you shouldn’t really use glSDL for anything
serious in its current form. It’s a proof-of-concept hack, and not
all that useful, as it’s “on the wrong side” of SDL. (It’s a source
level wrapper, rather than a run time selectable backend.)

Actually, I wasn’t going to use it; I’d rather code some 2D-over-OpenGL
functions myself, maybe inspired by yours :)

Yeah, that’s probably a better idea, at least if you want to make use of
the extra bonus features that come with OpenGL.

glSDL isn’t something you program for - it’s something you use to speed
up existing SDL code without rewriting it. I can’t make it much more than
that without extending the SDL API - and then code using the extensions
wouldn’t work without OpenGL anyway, unless it were actively aware of both
targets!

[SNIP]

So, in short, if you’re going to blit mostly static surfaces (tiles,
sprites etc) to the screen, and draw filled rects, glSDL will work
great.

Yes, mostly, if not only…

Why do you need a h/w surface, then? If you’re using any significant
amount of alpha blending, rendering directly to VRAM is slower than
rendering into a software buffer. And on targets with busmaster DMA,
software rendering directly to VRAM is much slower than rendering into
a system RAM buffer that is DMA blitted into VRAM every frame.

Finally, on targets with accelerated VRAM->VRAM blits, it shouldn’t
matter if you get a hard or soft surface. Surface->screen blits should
be accelerated blits, rather than software blits to an off-screen
software buffer.

HOWEVER, AFAIK, with SDL, if you get a software surface, all blitting
will be software, even if you never need to lock the screen surface. (Am
I right, or did I miss something?)

(See my post on SDL_NOSHADOW.)

Needless to say, it’s a pity that a user cannot get a simple HW
surface. Is it XFree- or SDL-related?

It’s driver related. AFAIK, SDL will give you a real h/w surface whenever
it’s possible to do with the underlying API and driver.

Has anyone tried Evas (Enlightenment’s canvas) for rendering + SDL for
events? ;)

No… Does it handle some form of “textures” or pixmaps, like a higher
level version of the X protocol? If so, it could actually be quite
interesting - at least if it’s using h/w accelerated blits where possible.

Then again, if Evas can do it, so should SDL. (Again, see my SDL_NOSHADOW
proposal.)

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

On Wednesday 09 January 2002 17:11, allergy wrote:

On Wednesday 9 January 2002 21:23, you wrote:

On Wednesday 9 January 2002 19:27, you wrote:

Hi,

I use SDL on Linux, with XFree86 4.1 and the latest NVidia
drivers. And I wonder if there’s a way to get a hardware surface
with this config.

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp,
… It always gives me a software surface.

You have to be root and use DGA - but I still can’t say for sure
if it’s possible with that configuration.

I really don’t want to be root.
It’s baaaaaaad :)

Well, you can try DirectX on Windows, if you like. There, you get the
privilege of blowing up the system without even having "PowerUser"
privileges - and sysadmins can’t keep you from doing it…! :D

eheh… You should see my old win box!
err, no, you couldn’t see anything, I took its RAM to put it in my
linux box :)

So, in short, if you’re going to blit mostly static surfaces
(tiles, sprites etc) to the screen, and draw filled rects, glSDL
will work great.

Yes, mostly, if not only…

Why do you need a h/w surface, then? If you’re using any significant
amount of alpha blending, rendering directly to VRAM is slower than

Currently, no alpha channel, but a source colorkey.

rendering into a software buffer. And on targets with busmaster DMA,
software rendering directly to VRAM is much slower than rendering
into a system RAM buffer that is DMA blitted into VRAM every frame.

Yes, software rendering would be slow. But I only deal with “static”
surfaces (not a single access to the buffers used), and if every
surface were hardware, in VRAM, I hope blits would be faster!
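(For what it’s worth, this is the kind of setup I mean - SDL 1.2; “sprite.bmp” and the magenta key are just placeholders:)

    /* One-time setup: convert to the display format and set a colorkey.
       SDL_RLEACCEL makes colorkeyed software blits considerably faster. */
    SDL_Surface *tmp = SDL_LoadBMP("sprite.bmp");
    SDL_Surface *sprite;
    SDL_SetColorKey(tmp, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                    SDL_MapRGB(tmp->format, 255, 0, 255));
    sprite = SDL_DisplayFormat(tmp);
    SDL_FreeSurface(tmp);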

Finally, on targets with accelerated VRAM->VRAM blits, it shouldn’t
matter if you get a hard or soft surface. Surface->screen blits
should be accelerated blits, rather than software blits to an
off-screen software buffer.

HOWEVER, AFAIK, with SDL, if you get a software surface, all
blitting will be software, even if you never need to lock the screen
surface. (Am I right, or did I miss something?)

Whatever I try, screen->flags always keeps telling me I got a software
surface…

Has anyone tried Evas (Enlightenment’s canvas) for rendering + SDL
for events? ;)

No… Does it handle some form of “textures” or pixmaps, like a
higher level version of the X protocol? If so, it could actually be
quite interesting - at least if it’s using h/w accelerated blits
where possible.

Then again, if Evas can do it, so should SDL. (Again, see my
SDL_NOSHADOW proposal.)

I’ve just discovered it some days ago… From what I’ve seen, Evas may
use software, classic X11 or OpenGL.
BUT I don’t think it’s as portable as SDL :P

I’m just waiting for an OpenGL backend in SDL. Used carefully, it could
rock :) Stretched blits, rotations, even simple horizontal/vertical
flips, etc.
Hmm … I think I’m gonna write my own opengl-2D-engine :)


Allergy
http://www.alrj.org

On Wednesday 09 January 2002 22:27, allergy wrote:

[…]

Has anyone tried Evas (Enlightenment’s canvas) for rendering + SDL
for events? ;)

No… Does it handle some form of “textures” or pixmaps, like a
higher level version of the X protocol? If so, it could actually be
quite interesting - at least if it’s using h/w accelerated blits
where possible.

Then again, if Evas can do it, so should SDL. (Again, see my
SDL_NOSHADOW proposal.)

I’ve just discovered it some days ago… From what I’ve seen, Evas may
use software, classic X11 or OpenGL.
BUT I don’t think it’s as portable as SDL :P

The OpenGL part could be pretty portable…

I’m just waiting for an OpenGL backend in SDL. Used carefully, it could
rock :) Stretched blits, rotations, even simple horizontal/vertical
flips, etc.

Well, if you want anything extra, apart from the raw speed, glSDL is
not for you. As I said, I can’t make glSDL do anything much fancier
without extending the SDL API in ways that can’t be backported to the
other backends without serious performance hits.

Sure, there could be “hidden” hooks for some library, like SGE or
whatever, to use instead of software stretching/rotation whenever the
OpenGL backend is used - but still, if you want serious OpenGL power,
use OpenGL directly.

Hmm … I think I’m gonna write my own opengl-2D-engine :)

Speaking of engines: the one I use in Kobo Deluxe will eventually get
native OpenGL support. I’m not sure how far I’m going to take it (it has
to do more than 10 fps without OpenGL! ;-), but it’ll make better use of
OpenGL than glSDL can do.

(For example, there will be ultra smooth, sub pixel accurate scrolling of
tiled backgrounds. The engine already makes 8 fraction bits available for
sub pixel accuracy - but glSDL can’t make use of them.)
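(A purely illustrative sketch of what using them could look like under OpenGL - not the engine’s actual code:)

    /* xs, ys: scroll position in 24.8 fixed point (8 fraction bits).
       OpenGL happily takes the fractional part, where a plain SDL
       blit would have to round to whole pixels. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-(float)xs / 256.0f, -(float)ys / 256.0f, 0.0f);
    /* ...then draw the background tiles at integer tile positions. */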

//David Olofson — Programmer, Reologica Instruments AB

.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------------> http://www.linuxdj.com/maia -'

.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`-------------------------------------> http://olofson.net -'

I’ve tried every combination of SDL_HWSURFACE, SDL_DOUBLEBUF,
SDL_HWACCEL, SDL_FULLSCREEN, being root or not, 16 or 24 bpp, …
It always gives me a software surface.

You have to be root and use DGA - but I still can’t say for sure if it’s
possible with that configuration.

It’s possible, but you have to use the non-accelerated ‘nv’ driver - if you
use it with nvidia’s ‘nvidia’ driver, X crashes (usually when returning from
the game), and keyboard and mouse stop working …

Also be sure to load ‘dga’ in XF86Config. Then just export
SDL_VIDEODRIVER=dga, be root (or suid stuff), and you’re off! (or, eh, on =)
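(You can also pick the driver from inside the program, before video init - a minimal sketch; setenv() is POSIX, and the root/module requirements above still apply:)

    #include <stdlib.h>
    #include "SDL.h"

    int main(int argc, char *argv[])
    {
        /* Must be set before SDL_Init() chooses a video driver. */
        setenv("SDL_VIDEODRIVER", "dga", 1);
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;
        /* ... SDL_SetVideoMode() and the rest of the game ... */
        SDL_Quit();
        return 0;
    }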

Maybe an entry for this in the FAQ would be nice?
--
Trick


Linux User #229006 * http://counter.li.org

“There is no magic” - Nakor, magic user

dude… http://www.libsdl.org/faq/FAQ-Linux.html#LINUX_12

It doesn’t mention XF86Config though.

There’s no way to do this without being root? I thought you could get
access to DGA without being root by having access to /dev/mem.
I suppose giving a program random access to /dev/mem is almost as bad
as having it run as root. Couldn’t SDL drop root privileges as soon as
it got access to /dev/mem, using setgid() and setuid()? Except you’d need
a uid and gid to change to, I guess? I think a common idiom is
to use setgid(getgid()); setuid(getuid()); for a setuid root program
to drop privileges, but of course that does nothing if you’re running
it from a root shell.
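(Something like this, I mean - a sketch of that idiom; the order matters, since calling setuid() first would throw away the right to change the gid:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* For a setuid-root binary: do the privileged setup first
       (e.g. let SDL map /dev/mem for DGA), then fall back to the
       real uid/gid of the user who ran us. Group first, while we
       still have the privilege to change it. */
    static void drop_privileges(void)
    {
        if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
            perror("dropping privileges");
            exit(1);
        }
    }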

I couldn’t find any setuid() or setgid() calls in the 1.2.3 source
so I guess SDL does not do this.

Of course, SDL programs could do it themselves, but that’s probably the sort
of platform-dependent stuff people want to avoid by using SDL, no?

On Thu, Jan 10, 2002 at 07:00:59PM +0100, Trick wrote:

Also be sure to load ‘dga’ in XF86Config. Then just export
SDL_VIDEODRIVER=dga, be root (or suid stuff), and you’re off! (or, eh, on =)

Maybe an entry for this in the FAQ would be nice?


Greg V. (hmaon)

hi!

when i read this thread, i was wondering one thing:
why does SDL not ship Mesa as its OpenGL implementation?
there are several reasons to do it that way:

  • you don’t need to implement and test that stuff (the
    mesa group is responsible for that)

  • it is free. AFAIK, the only one.

  • it’s fast.

  • you’d have a uniform implementation of opengl.

  • you would be able to write a “native” high lvl api
    around mesa.

perhaps it’s some kind of stupid idea, but i believe
it’d be a lot more comfortable for a windows user -
for example - if he only had to grab the SDL library to
get the newest opengl implementation, since several
(SDL-)games are written with opengl…

best regards,
Tolga Dalman.


when i read this thread, i was wondering one thing:
why does SDL not ship Mesa as its OpenGL implementation?
there are several reasons to do it that way:
[…]

  • it’s fast.

We’re talking about Mesa, right?

  • you’d have a uniform implementation of opengl.

OpenGL is a standard, you shouldn’t have to worry about that. SDL’s
purpose is just to take care of the non portable parts (GLX vs WGL, etc).

  • you would be able to write a “native” high lvl api
    around mesa.

That is not the purpose of SDL. Reference what the ‘S’ means.

--ryan.

Ryan C. Gordon wrote (Saturday, 12 January 2002 00:03):

  • you’d have a uniform implementation of opengl.

OpenGL is a standard, you shouldn’t have to worry about that. SDL’s
purpose is just to take care of the non portable parts (GLX vs WGL, etc).

of course, you’re right…

  • you would be able to write a “native” high lvl api
    around mesa.

That is not the purpose of SDL. Reference what the ’S’ means.

i know. i thought of a function that queries for extensions, for
example.
anyway, why are people talking about opengl blit or glsdl and similar
stuff? somehow i didn’t get the point ;)
anyway, as far as i understood the sdl concept (i’m quite new to sdl),
you can use sdl for 2d, sound, network, events, etc., but if you want
to draw 3d stuff, you have to use the external opengl library.
that’s ok so far. i think there should either be no interfering sdl-gl
calls or a complete opengl implementation. that was what i thought.
it’d be great if someone could explain these problems.

tnx folks…
Tolga Dalman.


ates x wrote:

i know. i thought of a function that queries for extensions, for
example.
anyway, why are people talking about opengl blit or glsdl and similar
stuff? somehow i didn’t get the point ;)
anyway, as far as i understood the sdl concept (i’m quite new to sdl),
you can use sdl for 2d, sound, network, events, etc., but if you want
to draw 3d stuff, you have to use the external opengl library.
that’s ok so far. i think there should either be no interfering sdl-gl
calls or a complete opengl implementation. that was what i thought.
it’d be great if someone could explain these problems.

It’s really pretty simple. SDL is well designed to implement
most modern 2D games. These games depend on a lot of blits
and that’s what is optimized and accelerated in SDL’s 2D
APIs.

3D on the other hand has much higher, and I do mean MUCH
higher performance requirements. You get the power for
3D mostly from the video card, with help from the CPU and
a lot of help from the AGP bus. Getting help from the video
card requires a driver that knows how to make the best use
of the video card. The internals of each video card are
different from every other video card. So you need special
drivers for each video card. The driver usually includes
support for OGL; often it includes a custom implementation
of OGL whose internals are customized for the video card.
So, it makes sense for SDL to use whatever version of OGL
is available on the computer it is running on. Including
Mesa would only give you a version of OGL that is equally
slow on every computer.

Warning: what I just said is a very high level overview
as I see it. YMMV

	Bob Pendleton

hi again!

first of all thank you for your fast reply!


3D on the other hand has much higher, and I do mean MUCH
higher performance requirements. You get the power for
3D mostly from the video card, with help from the CPU and
a lot of help from the AGP bus. Getting help from the video
card requires a driver that knows how to make the best use
of the video card. The internals of each video card are
different from every other video card. So you need special
drivers for each video card. The driver usually includes
support for OGL; often it includes a custom implementation
of OGL whose internals are customized for the video card.

bob, that’s not 100% correct. the driver adds extensions and tweaks
the implementation. there’s (afaik) no custom implementation for each
vendor ;)
anyway, be aware that opengl has nothing to do with 3d performance.
and also keep in mind that opengl (sdl does the same for 2d) is a
unified api for different rendering hardware.

So, it makes sense for SDL to use whatever version of OGL
is available on the computer it is running on.

of course it makes sense. but that’s not the point, is it?

Including Mesa would only give you a version of OGL that is
equally slow on every computer.

that’s not correct either, bob. it depends on the system (cpu
architecture, graphics card, os, …). nevertheless, i can’t think of an
sdl developer who aims to write a game that runs noticeably slower or
faster on linux than on windows on the same system.

Warning: what I just said is a very high level overview
as I see it. YMMV

  Bob Pendleton

what does YMMV mean?

l8ers,
Tolga.


i know. i thought of a function that queries for
extensions, for example.

glGetString(GL_EXTENSIONS);
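For example (a sketch - only valid once a GL context exists, and the extension name is just an arbitrary one to test for; note the naive substring match):

    #include <string.h>
    #include <GL/gl.h>

    /* Returns nonzero if the running GL implementation advertises
       the named extension. Only valid after a GL context exists
       (e.g. after SDL_SetVideoMode(..., SDL_OPENGL)). */
    static int have_extension(const char *name)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, name) != NULL;
    }

    /* Usage: if (have_extension("GL_ARB_multitexture")) ... */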

anyway, why are people talking about opengl blit or
glsdl and similar stuff? somehow i didn’t get the
point ;)

SDL_OPENGLBLIT is a way to render a 2D graphic into a 3D OpenGL context. It is
very slow, and using standard OpenGL calls will yield better results. It
was added to SDL for the Building Architect Tool from SimCity 3000: since
Direct3D has the ability to blit 2D surfaces to 3D space quickly, built
in, and the ’Tool uses it, a solution was needed - and it’s about the best
you can do with OpenGL, in general terms.

glSDL is something that David is working on, which lets you use OpenGL as
if it were any other 2D backend, which may yield better framerates, but the
person using SDL for 2D graphics doesn’t know that OpenGL is getting the
bits to the screen; she’s just drawing 2D stuff that could be going to any
number of video targets (x11, directx, fbcon, ascii-art-lib, etc.). This
is not part of SDL at this point, but it’s a cool idea.

anyway, as far as i understood the sdl concept (i’m
quite new to sdl), you can use sdl for 2d, sound,
network, event, etc, but if you want to draw 3d stuff,
you have to use the external opengl library.
that’s ok so far. i think there should either be no
interfering sdl-gl calls or a complete opengl
implementation. that was what i thought.
it’d be great if someone could explain these
problems.

SDL provides an abstraction interface to 2D graphics that is portable
across platforms. Writing a whole 3D api from scratch is pointless since
OpenGL is freely available. OpenGL should be entirely portable, with one
exception…context creation. The way you create an OpenGL window and
instruct OpenGL that rendering instructions go to a given window is
different for every platform. SDL sets that part up for you in a portable
way, and then steps out of the way to let OpenGL do the rendering (with
minor exceptions like SDL_GL_SwapBuffers(), which is also
platform-dependent).
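In code, the portable part SDL handles comes down to something like this (a minimal SDL 1.2 sketch, error checks omitted):

    #include "SDL.h"
    #include <GL/gl.h>

    int main(int argc, char *argv[])
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

        /* The one non-portable step, done portably: create a window
           with an OpenGL context attached. */
        SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);

        /* From here on it's plain, portable OpenGL... */
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapBuffers();  /* ...except the buffer swap. */

        SDL_Quit();
        return 0;
    }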

There isn’t a need to ship a complete OpenGL implementation with SDL,
because of the “ABI” (Application Binary Interface? Something like that)
which says that a compliant OpenGL implementation has certain
characteristics that make it possible to drop in a completely different
implementation without disrupting the rest of the system. That’s why SDL
can use Mesa’s GL-like library, or Nvidia’s (completely separate) OpenGL
implementation without recompiling/changing SDL.
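(SDL even lets a program use that drop-in property directly at run time - a sketch; the library path is just an example, and anything loaded this way should have its entry points fetched through SDL_GL_GetProcAddress():)

    /* Before SDL_SetVideoMode(..., SDL_OPENGL): */
    if (SDL_GL_LoadLibrary("/usr/lib/libGL.so.1") < 0)
        fprintf(stderr, "GL load failed: %s\n", SDL_GetError());

    /* Entry points then come from whichever library was loaded: */
    void (*glClear_p)(GLbitfield) =
        (void (*)(GLbitfield)) SDL_GL_GetProcAddress("glClear");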

This is true for Windows, too, which ships with an OpenGL implementation
that can be overridden by hardware manufacturers’ versions (Nvidia’s
OpenGL, ATI’s version, etc…) specific to that 3D hardware.

I believe that BeOS and MacOS (Classic and X) also have either standard
OpenGL implementations installed by default or they are freely
downloadable.

Did that clear anything up?

--ryan.

ates x wrote:

hi again!

first of all thank you for your fast reply!


3D on the other hand has much higher, and I do mean MUCH
higher performance requirements. You get the power for
3D mostly from the video card, with help from the CPU and
a lot of help from the AGP bus. Getting help from the video
card requires a driver that knows how to make the best use
of the video card. The internals of each video card are
different from every other video card. So you need special
drivers for each video card. The driver usually includes
support for OGL; often it includes a custom implementation
of OGL whose internals are customized for the video card.

bob, that’s not 100% correct. the driver adds extensions and tweaks
the implementation. there’s (afaik) no custom implementation for each
vendor ;)

At the driver level it is unique for each vendor with
tweaks for each video card. I spent 5 years writing
high performance drivers for 3D video hardware.

anyway, be aware that opengl has nothing to do
with 3d performance.

Where did you get that idea? The API has a number of features
specifically designed to make it possible to implement it
efficiently on a lot of different hardware. The specific
implementation for a specific piece of hardware has a dramatic
effect on performance.

Companies (SGI, IBM, HP…) have spent hundreds
of man-years customizing the internals of their
implementations of OpenGL to get high performance.
They have modified the internals of their OSes. They
have changed the designs of busses and processors.
They have even changed the way their compilers call
subroutines and how their linkers work.

and also keep in mind that opengl (sdl does the same
for 2d) is a unified api for different rendering
hardware.

Very true. I’m well aware of this.

So, it makes sense for SDL to use whatever version of OGL
is available on the computer it is running on.

of course it makes sense. but that’s not the point, is it?

Including Mesa would only give you a version of OGL that is
equally slow on every computer.

that’s not correct either, bob. it depends on the
system (cpu architecture, graphics card, os, …).
nevertheless, i can’t think of an sdl developer who
aims to write a game that runs noticeably slower or
faster on linux than on windows on the same system.

I’m sorry, I can’t make sense of this statement. I
suspect you are not a native English speaker and that
you have taken my use of “equally” to mean “exactly
the same”, which would make what I said seem pretty
ridiculous. I will try to be more precise in the
future.

What I was trying to say is that the software rendering
based, machine and OS independent, version of Mesa will
perform poorly compared to a hardware accelerated version
of OGL, on any given machine.

Warning: what I just said is a very high level overview
as I see it. YMMV

          Bob Pendleton

what does YMMV mean?

Your Mileage May Vary. It is a reference to television
commercials for cars, where they make claims about how
many miles per gallon a car will get. And then they
put in the phrase “Your Mileage May Vary” to keep from
being sued when real people with real cars don’t get
the advertised mileage. Of course, this phrase points
out the fact that the US still uses miles and gallons
when almost all the rest of the world uses kilometers
and liters.

l8ers,
Tolga.




+------------------------------------+
+ Bob Pendleton is seeking contract  +
+ and consulting work. Find out more +
+ at http://www.jump.net/~bobp       +
+------------------------------------+

hello.

thank you all for clearing my mind up ;)

bob, indeed english is not my native language, but
nevertheless, let me state one or two things:
when i said opengl does not have anything to do with
performance, i wasn’t talking about any implementation
or driver, just the standard (in contrast to directx,
which is an implementation at the same time).
i know fairly well the difference between “equally” and
“the same”. i don’t know, but doesn’t x + mesa +
hardware-driver remain a mesa implementation? that was
the reason why i was confused…

bob (and all the others), i’m very sorry for bothering
you with stupid newbie-questions, and it’d be ok if we
closed this thread now.

thank you all for the time you spent answering
;)

bye,
Tolga.

ps: i couldn’t know that YMMV thing, right? thanks
anyway…


ates x wrote:

hello.

thank you all for clearing my mind up ;)

bob, indeed english is not my native language, but
nevertheless, let me state one or two things:
when i said opengl does not have anything to do with
performance, i wasn’t talking about any implementation
or driver, just the standard (in contrast to directx,
which is an implementation at the same time).
i know fairly well the difference between “equally” and
“the same”. i don’t know, but doesn’t x + mesa +
hardware-driver remain a mesa implementation? that was
the reason why i was confused…

We are nearly in agreement here. The OpenGL API was designed
to make it possible to create high performance
implementations. The earliest version of GL that I am aware
of was a FORTRAN API for the Evans & Sutherland Picture System
back in the early-to-mid '70s. (Yes, I was around to see one,
but I didn’t ever get to use one.) This API was adopted by
SGI for their original workstations (which makes sense, because
the founder of SGI was a grad student of Dr. Evans, and I believe
he worked for E&S). It was adapted to C and extended with each
generation of the SGI machines, but was always designed with
the single goal of providing high performance access to SGI
hardware. If you compare OpenGL to failed graphics APIs such
as PHIGS and X’s own PEX, you can see that OpenGL has a much
more hardware oriented API and support for both fast immediate
mode and retained mode rendering - features that make it
very easy to use and easy to implement with high
performance.

Your other point. X+Mesa+Hardware driver is a Mesa implementation.
And I see how I confused what you were saying. I was answering a
question you didn’t ask.

I do not like the idea of shipping Mesa with SDL, for the simple
fact that over time it will tend to tie SDL to Mesa. There is
no point in shipping Mesa with SDL on Windows because there are
perfectly good implementations of OGL for Windows. I don’t like
the idea of shipping it with SDL for Linux because most versions
of Linux (all the ones I’ve used) come with Mesa. And anyone
using an NVidia graphics card, like me, will have installed the
NVidia OpenGL libraries, which are very different from the Mesa
libraries. I don’t know about the rest of the OSes that SDL
supports. I’ve never worked with BeOS, and I haven’t worked with
MacOS for 6+ years, and not that much even then. So, why ship it
if everyone already has it?

bob (and all the others), i’m very sorry for bothering
you with stupid newbie-questions, and it’d be ok if we
closed this thread now.

I really don’t mind answering newbie questions. If I did, I wouldn’t
answer them. :)
I just wish I had done a better job with this one.





+------------------------------------+
+ Bob Pendleton is seeking contract  +
+ and consulting work. Find out more +
+ at http://www.jump.net/~bobp       +
+------------------------------------+