OpenGL and SDL


Hi there,

I have a quick question about a non-standard way of using SDL in
conjunction with OpenGL. I am posting here because someone might have
tried this already.

The normal way of combining 2D and 3D graphics is obviously to use
OpenGL for all 2D stuff (since 2D is a subset of 3D) using quads,
triangle strips, etc.

Now, is it possible to do the opposite: render 3D using OpenGL into
bitmaps, but then use SDL’s 2D functions to draw to the screen?

A good example would be a 2D-scroller-game with a 3D character. One
would render the 3D character using OpenGL - nicely lit and textured -
and then blit it into the standard 2D environment.

I am aware of Mesa’s off-screen rendering option that allows rendering
into a framebuffer in memory. I’ve experimented with that already. The
problem with this method is that all hardware acceleration is turned off
and all OpenGL functions are done purely in software.

So basically, is it possible somehow to use accelerated OpenGL (right on
the card) without a display and get access to the 3D framebuffer for
blits?

Any ideas and comments are appreciated!

Andreas

--
| Andreas Schiffler aschiffler at home.com |
| 4707 Eastwood Cres., Niagara Falls, Ont L2E 1B4, Canada |
| +1-905-371-3652 (private) - +1-905-371-8834 (work/fax) |


On Saturday 09 December 2000 16:42, Andreas Schiffler wrote:

> Hi there,
>
> I have a quick question about a non-standard way of using SDL in
> conjunction with OpenGL. I am posting here because someone might
> have tried this already.
>
> The normal way of combining 2D and 3D graphics is obviously to use
> OpenGL for all 2D stuff (since 2D is a subset of 3D) using quads,
> triangle strips, etc.
>
> Now, is it possible to do the opposite: render 3D using OpenGL into
> bitmaps, but then use SDL’s 2D functions to draw to the screen?

The problem with this is that there are cards that can’t even render
to any place where they can blit from; i.e., they render only to the
frame buffer.

As for rendering to system RAM, it’s probably the exception rather than
the rule that cards accelerate that, and that’s a major problem if the
next thing you want to do is blit using the CPU… (Reading VRAM is
very, very slow on most cards.)

> A good example would be a 2D-scroller-game with a 3D character. One
> would render the 3D character using OpenGL - nicely lit and
> textured - and then blit it into the standard 2D environment.
>
> I am aware of Mesa’s off-screen rendering option that allows rendering
> into a framebuffer in memory. I’ve experimented with that already. The
> problem with this method is that all hardware acceleration is turned
> off and all OpenGL functions are done purely in software.
>
> So basically, is it possible somehow to use accelerated OpenGL
> (right on the card) without a display and get access to the 3D
> framebuffer for blits?

Why would you want to do that if you have h/w accelerated OpenGL on
the machine…?

The only reason I can see is pixel level effects, but most of those
can be implemented using what the 3D folks call "procedural textures"
and various forms of blending, or by simply doing the actual rendering
using triangle strips. The latter is probably the most suitable
method for lower density particle effects, while procedural textures
are nice for video playback and similar stuff.

//David

.- M A I A -------------------------------------------------.
|      Multimedia Application Integration Architecture      |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'


David,

> Why would you want to do that if you have h/w accelerated OpenGL on
> the machine…?

Simple. I’d like to accelerate OpenGL graphics on the desktop without
having to run the desktop in 3D mode. I want accelerated 3D, or basically
to run any 3D stuff without full-screen mode.

It seems from your description that current graphics cards are just
made for one thing: pump data to the screen, flip buffers.

Man, would I love UMA now! Good old Amiga or SGI’s Onyx …

Regards
Andreas



| Andreas Schiffler aschiffler at home.com |
| Senior Systems Engineer - Deskplayer Inc., Buffalo |
| 4707 Eastwood Cres., Niagara Falls, Ont L2E 1B4, Canada |
| +1-905-371-3652 (private) - +1-905-371-8834 (work/fax) |


David Olofson asked:

> Why would you want to do that if you have h/w accelerated
> OpenGL on the machine…?

Andreas Schiffler replied:

> Simple. I’d like to accelerate OpenGL graphics on the desktop
> without having to run the desktop in 3D mode. I want
> accelerated 3D or basically run any 3D stuff without full
> screen mode.
>
> It seems from your description, that current graphics cards
> are just made for one thing: pump data to the screen, flip
> buffers.

Do you mean delegate OpenGL compositing to the h/w accelerated card,
then integrate the result in 2D, mixing OpenGL 3D and bitmap 2D
techniques?

--
Marc Lavallée


Marc Lavallée wrote:

> Do you mean delegate OpenGL compositing to the h/w accelerated card,
> then integrate the result in 2D, mixing OpenGL 3D and bitmap 2D
> techniques?

Exactly, that’s what I would like to do!

--
| Andreas Schiffler aschiffler at home.com |
| Senior Systems Engineer - Deskplayer Inc., Buffalo |
| 4707 Eastwood Cres., Niagara Falls, Ont L2E 1B4, Canada |
| +1-905-371-3652 (private) - +1-905-371-8834 (work/fax) |



Andreas Schiffler wrote:

> Now, is it possible to do the opposite: render 3D using OpenGL into
> bitmaps, but then use SDL’s 2D functions to draw to the screen?

To answer my own question, I found this link:
http://www.mesa3d.org/brianp/sig97/offscrn.htm

Rather old though … anybody have newer info?

Also, I found another good example for using this. Let’s assume you want
to include OpenGL-rendered objects in buttons, menus, etc. - you’d
need the bitmaps. There is an example of doing this with the WGL
implementation here:
http://msdn.microsoft.com/library/techart/msdn_gl6.htm

Cheers
Andreas

--
| Andreas Schiffler aschiffler at home.com |
| Senior Systems Engineer - Deskplayer Inc., Buffalo |
| 4707 Eastwood Cres., Niagara Falls, Ont L2E 1B4, Canada |
| +1-905-371-3652 (private) - +1-905-371-8834 (work/fax) |


> It seems from your description that current graphics cards are just
> made for one thing: pump data to the screen, flip buffers.
>
> Man, would I love UMA now! Good old Amiga or SGI’s Onyx …

Oops, sorry if this is offtopic… but what does UMA mean?

--
signed
derethor of centolos

Derethor wrote:

> > It seems from your description that current graphics cards are just
> > made for one thing: pump data to the screen, flip buffers.
> > Man, would I love UMA now! Good old Amiga or SGI’s Onyx …
>
> Oops, sorry if this is offtopic… but what does UMA mean?

Unified memory architecture. Basically everything (including the
graphics hardware) shares the same physical memory, which generally
makes things fast.

-John

--
Underfull \account (badness 10000) has occurred while \spend is active
John R. Hall - Student, Georgia Tech - Contractor, Loki Software

Derethor wrote:

> > It seems from your description that current graphics cards are just
> > made for one thing: pump data to the screen, flip buffers.
> >
> > Man, would I love UMA now! Good old Amiga or SGI’s Onyx …
>
> Oops, sorry if this is offtopic… but what does UMA mean?

Unified Memory Architecture. Depending on the context it means either that
the GPU and the CPU share the same memory or that the graphics card has the
ability to use any region of memory for the backbuffer.

On consoles UMA is used to make more efficient use of memory. The X-Box,
e.g., will have a unified memory architecture, so you don’t have to keep
your textures both in RAM and on your board; you share the memory, tell the
GPU where your geometry and textures reside, and the GPU can access them
directly without any copying.

                        Daniel Vogel, Programmer, Loki Software Inc.

On Mon, 11 Dec 2000, John R. Hall wrote:

> Derethor wrote:
> > It seems from your description that current graphics cards are just
> > made for one thing: pump data to the screen, flip buffers.
> > Man, would I love UMA now! Good old Amiga or SGI’s Onyx …
> > Oops, sorry if this is offtopic… but what does UMA mean?
>
> Unified memory architecture. Basically everything (including the
> graphics hardware) shares the same physical memory, which generally
> makes things fast.

Not really offtopic, as it is related to various architectures currently or
eventually supported…
-> the Microsoft X-Box uses UMA
-> PowerPC does in some circumstances
-> yes, 68K systems are UMA

It has its weaknesses (such as, umm, bandwidth overload problems in the
X-Box for example :) ) but it’s quite nice.

I’d still prefer to see DMA support in Linux though :)

G’day, eh? :)
- Teunis

Tue, 12 Dec 2000 John R. Hall wrote:

> Derethor wrote:
> > It seems from your description that current graphics cards are just
> > made for one thing: pump data to the screen, flip buffers.
> > Man, would I love UMA now! Good old Amiga or SGI’s Onyx …
> > Oops, sorry if this is offtopic… but what does UMA mean?
>
> Unified memory architecture. Basically everything (including the
> graphics hardware) shares the same physical memory, which generally
> makes things fast.

Well, except that every subsystem has to fight for access to the same memory
all the time. There are main boards with built-in video that work like
this, and they generally suck in all respects but the fact that there is
no PCI/AGP bottleneck. Most importantly, they steal memory bandwidth from the
CPU…

BTW, if you have an OCS or ECS Amiga 500, you can try this: set the screen mode
to 640x400/512 with 16 colors. Now see how much CPU power you have left… (The
CPU is entirely blocked from the bus, except in the video blanking intervals.
Fun, eh? ;) )

Besides, the “trick” is just that everything is using the same bus directly.
That means that you can’t install a video card with faster RAM, and that you
can’t upgrade to faster CPU FSB + faster RAM without replacing the entire
system. It also means that you can’t have some hardware make heavy use of the
bus without impacting performance of the entire system.

No, I’d say the current systems are superior from the design perspective - but
it’s obvious that the hardware and driver implementations are far from
optimal…

//David

.- M A I A -------------------------------------------------.
|      Multimedia Application Integration Architecture      |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

Tue, 12 Dec 2000 winterlion wrote:

> It has its weaknesses (such as, umm, bandwidth overload problems in the
> X-Box for example :) ) but it’s quite nice.

heh, I was just reading some other post, thinking "OK, it probably works
fine, as game consoles use lower resolutions and thus don’t need extreme
fill rates," but it seems that I was overly optimistic…

> I’d still prefer to see DMA support in Linux though :)

Yes, indeed, and that’s the only way to do it, unless we’re going to design
our own hardware. :)

//David

.- M A I A -------------------------------------------------.
|      Multimedia Application Integration Architecture      |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'

> Besides, the “trick” is just that everything is using the same bus
> directly. That means that you can’t install a video card with faster RAM,
> and that you can’t upgrade to faster CPU FSB + faster RAM without
> replacing the entire system. It also means that you can’t have some
> hardware make heavy use of the bus without impacting performance of the
> entire system.

But aren’t you doing that with current systems? If you only change the CPU
or the video card to something, say, twice as fast, is your CPU performance
or video performance doubled? Even when you change both you still don’t seem
to get the full performance benefit. And the more you change, the closer you
are to having bought a whole new system…

Cheers,

Bryan

Tue, 12 Dec 2000 Bryan Pope wrote:

> > Besides, the “trick” is just that everything is using the same bus
> > directly. That means that you can’t install a video card with faster
> > RAM, and that you can’t upgrade to faster CPU FSB + faster RAM without
> > replacing the entire system. It also means that you can’t have some
> > hardware make heavy use of the bus without impacting performance of
> > the entire system.
>
> But aren’t you doing that with current systems? If you only change the
> CPU or the video card to something, say, twice as fast, is your CPU
> performance or video performance doubled? Even when you change both you
> still don’t seem to get the full performance benefit. And the more you
> change, the closer you are to having bought a whole new system…

Well, of course, but that’s a different problem; most optimized programs are
designed to distribute the load across the subsystems according to what they
can cope with on the average machine. Subsystems are CPU, GPU, VRAM, sysRAM, HD
etc, and indeed, to speed up all of these, you really need to replace
everything that’s not fast enough, which usually means the entire machine.
(Provided you start out with a well balanced machine, that is.)

What I’m talking about is merely the impact of each single subsystem on all
other subsystems; not how the performance of each subsystem affects the actual
speed of the average, not very scalable application.

For example, on a UMA machine, the 3D accelerator would slow down the CPU,
disk I/O and other things because it needs to use the system bus to read
texture data and write to the frame buffer area. On a normal PC or other
workstation (where the GPU has its own RAM for textures and framebuffer, the
VRAM, and in some cases separate texture RAM), this doesn’t happen, as the
GPU, the VRAM and the RAMDAC have a (very wide and fast) bus all of their
own that doesn’t interfere with the system bus. That’s the advantage of the
workstation-style arch.

The disadvantage is that you need a bridge between the system bus and the video
card bus to enable the GPU to DMA sysRAM, and the CPU to access VRAM.
Obviously, the former is very fast and well optimized, especially on AGP cards
(that’s why there is AGP in the first place!), but the latter seems to have been
totally ignored in the quest for faster 3D…

//David

.- M A I A -------------------------------------------------.
|      Multimedia Application Integration Architecture      |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> david at linuxdj.com -'