OpenGL, Linux, Windows

David’s glSDL wrapper seems to be Linux only.

I know for a fact that it works on Win32, and I think it works on Mac OS X as well. It should work on any platform that has OpenGL, unless I did something stupid somewhere. (In the next release, though, I’ll even try to have it compile properly without faking the #defines from autoconf… :wink: )

Basically, if I don’t say anything about platforms, assume the code is supposed to be portable. :slight_smile:

As I need to create an App that can be used in Windows and in Linux (that is
why I wanted to use SDL) I can’t use it.

Well, you could, but it doesn’t have an official “stretch blit” function, since SDL’s 2D API doesn’t have one. Using OpenGL directly would be better in your case - and cleaner.

But as I need to do the stuff in Pascal anyway, and as supposedly the only API I need is “stretchblt”, maybe it can help me learn what I need to do to access OpenGL, and I can recreate the OpenGL access functions in Pascal for both environments. That would mean dropping SDL (and you get rid of me). Is this reasonable?

Why drop SDL to get access to OpenGL? Doesn’t the Delphi port of SDL support setting up OpenGL contexts? (And there most probably are a bunch of OpenGL wrappers for Pascal. Check out the games programming sites.)
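For reference, a rough sketch in C of what setting up an OpenGL context through SDL looks like (standard SDL 1.2 calls; the window size and attribute choices are just examples, and translating it to the Pascal bindings should be mechanical):

#include "SDL.h"
#include "SDL_opengl.h"

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Ask for a double buffered GL context... */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    /* ...and open a 640x480 window that uses it. */
    if (!SDL_SetVideoMode(640, 480, 0, SDL_OPENGL))
    {
        SDL_Quit();
        return 2;
    }

    /* From here on, plain OpenGL calls work as usual. */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapBuffers();

    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}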

//David

.---------------------------------------
| David Olofson
| Programmer

david.olofson at reologica.se
Address:
REOLOGICA Instruments AB
Scheelevägen 30
223 63 LUND
Sweden
---------------------------------------
Phone: 046-12 77 60
Fax: 046-12 50 57
Mobil:
E-mail: david.olofson at reologica.se
WWW: http://www.reologica.se

`-----> We Make Rheology Real

On Sun, 16/06/2002 22:55:00, Michael Schnell wrote:

[…]

Do you think SDL/OpenGL prevents the use of hardware acceleration in non-fullscreen mode, or can I enable it and live with the shortcomings?

My experience is that clipping will Just Work™ with OpenGL.

Any properly working OpenGL implementation (except a few ones for old hardware) will accelerate windowed mode while still doing proper clipping.

This is much easier for an OpenGL driver than it is for DirectDraw. The former may implement the clipping pretty much any way it wants (as an extension of the ever-present view clipping, for example), while the latter is too low level to implement clipping efficiently for anything but blits done by the API/hardware. (Implementing “transparent” clipping for software blits to the frame buffer ought to be lots of fun… :slight_smile: )

//David

On Sun, 16/06/2002 22:37:00, Michael Schnell wrote:

[…fullscreen h/w surfaces…]

You could always submit a patch… I honestly am not
sure Sam would go for the idea of not clipping, as it
breaks the “window paradigm”, but it’s still worth
asking him.

Why not provide it as an extra feature, rather than a change of how windowed mode works? Add an SDL_FASTWINDOW flag or something… (Will just be ignored by older SDL libs, and not requested by old applications, so it shouldn’t break anything. Some new applications might need to check whether or not they got it, though.)

While this is certainly possible, I don’t think that even the Amiga port does this. If you feel ambitious, I’m sure Sam would welcome a patch to handle this case…

Do you think SDL/OpenGL prevents the use of hardware acceleration in non-fullscreen mode, or can I enable it and live with the shortcomings?

I am unsure if the final rendered-3D to 2D window blit
is accelerated under SDL/OpenGL,
[…]

AFAIK, that blit is always performed (and clipped :slight_smile: ) by the OpenGL driver, regardless of platform, so that shouldn’t be an issue.

//David

On Sun, 16/06/2002 14:21:42, Loren Osborn <linux_dr at yahoo.com> wrote:

== Original message from sdl at libsdl.org on 16.06.02 23:26

I was referring to 2D SDL video, so OpenGL doesn’t apply.

Now I’m a bit confused. Is OpenGL strictly 3D?

Well, it does use 3D coords for (almost) everything, but the features you need for mapping 3D to a flat screen are basically a superset of those needed for mapping 2D to a flat screen.

That is, OpenGL is 2D if you use it to render 2D scenes.

As DirectX does provide stretchblit (hardware supported, if the hardware can do it), I assumed OpenGL would have the same, too.

Sort of. While a “stretch blit” in most 2D APIs works with a source and a target rectangle, OpenGL and Direct3D work with a polygon defined by a number of vertices. Each vertex has a 3D coordinate (which is transformed to 2D through the view matrix), and a 2D texture coordinate.

That is, apart from plain X/Y stretching, you can do shearing, rotation, “free transforms” - and you also have perspective correct “3D” transformation of the image. All with a single and, considering the capabilities, rather simple interface.
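To make that concrete, a rough sketch of such a “stretch blit” as a textured quad in plain OpenGL 1.x immediate mode (the function name and coordinate convention are made up for the example, and it assumes an ortho view where one unit is one pixel):

#include "SDL_opengl.h"

/* Stretch the whole of 'tex' into the rectangle (x, y)-(x+w, y+h). */
void gl_stretch_blit(GLuint tex, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();
}

Shearing, rotation and perspective come “for free” by simply moving the four vertices around instead of keeping them axis-aligned.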

I don’t recall if SDL has a stretch blit; if not, it isn’t hard to write…

I can always write a software version of stretchblit in the user program, but what I am searching for is a hardware-supported stretchblit. So do you mean it would be “easy to write an SDL access function to access the stretchblit provided by the OpenGL driver for the card in question”?

I’d think that’s next to impossible. You can’t just use OpenGL in any normal SDL 2D mode, and even if you could, most cards would not be able to accelerate it anyway.

In my test (using Jedi_JCL by the Jedi project as a Pascal encapsulation for
the SDL-functions) I do:

if (SDL_BlitSurface(zoom, @src, screen_, @upd) < 0) then
begin
  Memo.Lines.Add(Format('Blit failed: %s', [SDL_GetError]));
end;

Thus BlitSurface can do a “stretch” (zoom)

SDL doesn’t stretch with any official call, and AFAIK, doesn’t support h/w accelerated scaling at all. (Well, except possibly for h/w video overlays, if that counts.)

and seems to be the equivalent of stretchblt in DirectX.

Is it accelerated?

//David

On Mon, 17/06/2002 07:44:00, Michael Schnell wrote:

[…OpenGL 2D or 3D?..]

Yes. If you want to do 2D in OpenGL you must draw with Z=0 and
use textures.

You can do what you want with Z, provided you set up a plain ortho view. (That way, Z coords will affect only the Z-buffer - which you can then use to speed up rendering if there’s lots of overlapping.)
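A minimal sketch of such an ortho setup (plain OpenGL 1.x; the pixel-based mapping and the function name are just an example):

#include "SDL_opengl.h"

/* Map OpenGL coordinates 1:1 to window pixels, (0,0) in the top left.
 * Z still works, but only affects the Z-buffer, not the projection.
 */
static void setup_ortho_2d(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, (double)width, (double)height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}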

I might add that since version 8.0, the same applies to DX.

All you need is Direct3D (which has been around for quite some time). The deal with 8.0 is just that it doesn’t have a 2D API at all, so you have to do it with the 3D API.

//David

On Mon, 17/06/2002 10:19:39, Paulo Pinto wrote:

You mean something like this?
http://www.kbs.twi.tudelft.nl/People/Staff/J.C.Wojdel/stretchblitgl/
(OK, it’s not the window that is resized and you don’t really grab the rectangle but it follows the mouse, but the drawing should be exactly what you need.)

On Mon, Jun 17, 2002 at 04:17:27PM +0200, CRV?ADER/KY wrote:

(…) I need to have a window showing an hw surface, then resize and antialias it
realtime using hardware accel while I grab one of its borders with the mouse. DirectX does it (I wrote an example
in VB6).


+-------------------------------------+
| from: J.C.Wojdel                    |
| J.C.Wojdel at cs.tudelft.nl          |
+-------------------------------------+

OpenGL does easily support
stretching or shrinking of textures

How? Can I do that with glSDL?

No, but it would be easy to hack in, if you don’t care about SDL API compatibility. It provides transparent tiling of large surfaces, converting surfaces into textures etc, so that might be a quicker way than hacking OpenGL directly…

(Note that hacks like that are not going into the “real” glSDL before SDL 1.3. glSDL is really meant to provide an accelerated SDL 2D API; not Yet Another OpenGL Wrapper With No or Dog Slow S/W Rendering.)

I need to have a window showing an hw surface, then resize and antialias it
realtime using hardware accel while I grab one of its borders with the mouse. DirectX does it (I wrote an example
in VB6).

If you want filtered/antialiased stretching, you have some more problems. Since OpenGL has texture size restrictions, you need to implement tiling (ie splitting the large image up), and for that to work with filtered scaling, the tile edges will need special handling.

glSDL doesn’t use filtered scaling at all, for the single reason that it’s not possible to do right within the limits of the SDL 2D API. For rendering a tiled background properly, you need more information than that provided by individual SDL_BlitSurface() calls.

Anyway, in your case, it might actually be quite trivial. Since you’re dealing with one large image, all you need to do is to make sure each texture has an outline at least one pixel wide outside the area you’re actually blitting from. (Tile size should be chosen so that if you were to blit the full texture size for each tile, tiles would have an overlap of 2 pixels.) That will ensure that filtering is correct across edges.
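In numbers, a sketch of that per-tile bookkeeping (the 256-pixel texture size is just an example, and this is not glSDL code; edge tiles still need clamping, and the same scheme applies vertically):

/* Split a wide image into textures of size TEX, where each texture
 * carries a 1-pixel border copied from its neighbours, so that
 * GL_LINEAR filtering stays seamless across tile edges.
 */
enum { TEX = 256, PAYLOAD = TEX - 2 };   /* texture size / visible pixels */

static void tile_layout(int tile, int *src_x, int *src_w, float *u0, float *u1)
{
    *src_x = tile * PAYLOAD - 1;   /* upload starts 1 pixel early...           */
    *src_w = TEX;                  /* ...and includes 1 extra pixel at the end */
    *u0 = 1.0f / TEX;              /* inset texcoords by one texel, so the     */
    *u1 = (TEX - 1.0f) / TEX;      /* border is filtered over but never shown  */
}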

//David

On Mon, 17/06/2002 16:17:27, CRV?ADER/KY wrote:

Ray,

OpenGL does easily support
stretching or shrinking of textures, and it also easily will do 2D-only
applications. I have done three of them myself. OpenGL makes a far far
faster 2D engine than SDL ever could, because even “2D” OpenGL is done
in hardware.

So your suggestion is to leave out SDL and access the very few OpenGL functions I need directly from my application? Can you provide me with a link where I can find the API specs, so that I can do this?

An advantage of SDL is that I can easily make my application use the standard GUI when OpenGL is not installed and make use of OpenGL when it is there. I would have to do this manually if I don’t use SDL, but I think that can be done, too.

-Michael Schnell, Krefeld, Germany,
mailto:@Michael_Schnell

and seems to be the equivalent of stretchblt in DirectX.

Is it accelerated?

AFAIK, it’s accelerated if the source and destination rectangles are in the
memory of the video-card.

The DirectX drivers usually don’t implement functions that are not
accelerated. You need to look at the “capabilities” of the driver, and use
your own software if the driver does not perform a certain function, because
it’s not implemented in the hardware of the card. So in the DirectX
description you’ll find an “overlay” feature, that lets you stack multiple
2D pictures on top of each other, each with transparent pixels defined to let the lower layers show through. I did not find a single card/driver that
supports this function, so I needed to create it in my software. It seems
that all drivers support stretchblt.

-Michael Schnell, Krefeld, Germany,
mailto:@Michael_Schnell

Jacek,

Your suggestions sound like a very good starting point!

Thanks,

-Michael Schnell, Krefeld, Germany,
mailto:@Michael_Schnell

The Radeon takes shortcuts in rendering I don’t like (per-poly mipmap for
example.) Despite the fact that it’s supposed to be faster given all
indications, it’s actually SLOWER by more than a good margin. Why? If
you want my opinion, it’s simply that ATI sucks it hard. :wink:

Not all GF4 cards are created equal. GF4 MX cards are basically just GF2
cards with a little added feature here and there. If you’re looking at a
GF4 MX, split the difference and get yourself a GF3. You’ll be happier
with it in the long run. GF4 Ti4200 sells similarly to the high-end GF4
MX card, isn’t that much slower, and has all the bells and whistles of its
big brothers the Ti4400 and Ti4600.

On Mon, Jun 17, 2002 at 04:07:46PM +0200, CRV?ADER/KY wrote:

You can buy a new GeForce4 64MB DDR TVout for 180$, and a GeForce2 64MB
SDR TVout for 120$. I’d suggest the GeForce4. If you have much money to
burn out, the best actual video card is the Radeon 8500 (240$+).


Joseph Carter Hey, that’s MY freak show!

If the user points the gun at his foot and pulls the trigger, it
is our job to ensure the bullet gets where it’s supposed to.


Limit the flaming on this mailing list, Joe.


Patrick “Diablo-D3” McFarland || unknown at panax.com
"Computer games don’t affect kids; I mean if Pac-Man affected us as kids, we’d
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." --Kristian Wilson, Nintendo, Inc, 1989

If you want filtered/antialiased stretching, you have some more problems.
Since OpenGL has texture size restrictions, you need to implement tiling (ie
splitting the large image up), and for that to work with filtered scaling,
the tile edges will need special handling.

Unless of course SDL were altered to allow the SGIS multisampling extension, which would certainly offer antialiased downward stretching, and probably fix most of the problems upward also.

-Thomas

CRV?ADER/KY wrote:

OpenGL does easily support
stretching or shrinking of textures

How? Can I do that with glSDL? I need to have a window showing an hw surface, then resize and antialias it
realtime using hardware accel while I grab one of its borders with the mouse. DirectX does it (I wrote an example
in VB6).

I am not familiar with glSDL.

OpenGL can do what you ask (but these hardware surfaces are totally
different from those in SDL… not sure what you meant). OpenGL scales
textures by mapping them to polygons of varying sizes; simply adjust the size of the polygons and the textures smoothly scale to match. This could easily be tied to a mouse movement. As far as anti-aliasing
goes, the texture itself will look rather anti-aliased when stretched or
shrunk (this can actually be controlled, but I believe the default does
what you want). It is also possible to anti-alias things directly in
OpenGL, but I haven’t dealt with that recently and cannot remember
specifics relating to textures and polygons.
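The “control” mentioned above is the texture filter setting; for reference, the standard calls look like this (sketch only - texture creation omitted, and the texture object is assumed to exist):

#include "SDL_opengl.h"

/* Select smooth (bilinear) filtering for an existing texture object.
 * Use GL_NEAREST instead of GL_LINEAR for blocky, unfiltered scaling.
 */
static void set_smooth_scaling(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}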

                                                        -ray skoog



Michael,

I’m sorry to report that I do not know enough to tell you if you could
drop SDL entirely, or if so, how you could do it. All of my projects
have had other uses for SDL so I have never had the need to experiment
with that sort of thing.

If you are interested in a dual-mode renderer, the OpenGL and SDL combo
works great. I have been doing this at work. My program tries to run
in OpenGL at first, but it times its frame-drawing speed, and if it
takes too long to draw in OpenGL it falls back to software. Has worked
perfectly on every computer and OS I have tried (various Windows, OSX,
Linux, OS9).

Of course the drawback of a dual-mode renderer is that when in SDL mode,
you would need to implement your own scaling functions (unless some SDL
lib covers this). OpenGL scales by means of mapping textures onto
polygons that are “too big” or “too small,” and so doesn’t actually ever
convert the raw textures (this obviously works very well because it is
what 3D games do). I think that implementing a pure software renderer
through SDL, meeting your requirements, that runs at a reasonable speed
would be challenging.

                        -ray skoog

== Original message from sdl at libsdl.org on 18.06.02 05:08

If you are interested in a dual-mode renderer, the OpenGL and SDL combo
works great. I have been doing this at work. My program tries to run
in OpenGL at first, but it times its frame-drawing speed, and if it
takes too long to draw in OpenGL it falls back to software. Has worked
perfectly on every computer and OS I have tried (various Windows, OSX,
Linux, OS9).

It would be great to see a little example.
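For the record, a rough sketch of that kind of fallback test in C (this is not Ray’s code; the helper function and the numbers are invented for the example):

#include "SDL.h"

void draw_gl_test_frame(void);   /* hypothetical: render one typical frame */

/* Time a handful of OpenGL frames; if they are too slow on average,
 * reopen the display without SDL_OPENGL and use the 2D API instead.
 */
int opengl_is_fast_enough(void)
{
    const int frames = 30;
    Uint32 start = SDL_GetTicks();
    int i;

    for (i = 0; i < frames; ++i)
    {
        draw_gl_test_frame();
        SDL_GL_SwapBuffers();
    }

    /* Require roughly 25 fps (40 ms/frame) or better to stay in GL mode. */
    return (SDL_GetTicks() - start) / frames < 40;
}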

Of course the drawback of a dual-mode renderer is that when in SDL mode,
you would need to implement your own scaling functions (unless some SDL
lib covers this).

Earlier in this thread I showed an example that uses “SDL_BlitSurface(zoom, @src, screen_, @upd)” (via JediSDL), and this works fine in pure GUI mode (Windows and Linux).

So I’d need to create an OpenGL work-alike using “mapping textures onto polygons” and use the same when appropriate. I need to find out how to initialize OpenGL in a way that it can live in a rectangle in the GUI, convert the coordinates of the rectangle to OpenGL coordinates, fill the large texture “polygon” (source rectangle), and how to have OpenGL show a certain part of the texture rectangle in my viewing rectangle in an appropriately stretched way.

Thanks,

-Michael Schnell, Krefeld, Germany,
mailto:@Michael_Schnell

I’m not entirely sure how that extension works, but I find it hard to believe that it will automagically fix the problem…

If you render a grid of totally independent quads, will the edges be properly antialiased, without the background leaking through around the edges?

//David

On Tue, 18/06/2002 01:19:02, Thomas Harte wrote:

If you want filtered/antialiased stretching, you have some more problems.
Since OpenGL has texture size restrictions, you need to implement tiling (ie
splitting the large image up), and for that to work with filtered scaling,
the tile edges will need special handling.

Unless of course SDL were altered to allow the SGIS multisampling extension, which would certainly offer antialiased downward stretching, and probably fix most of the problems upward also.

— David Olofson <david.olofson at reologica.se> wrote:

If you render a grid of totally independent quads,
will the edges be properly antialiased, without the
background leaking through around the edges?

//David

Hmm… I think you could alleviate the problem by having each of the tiles overlap by 2 pixels: the bottom tile edge would be opaque, but the edge-most pixel would be identical to the pixel beside it (so that the 2-pixel seam is two identical rows of pixels), and the top tile edge’s outermost pixels would be transparent, while the inner pixels (one pixel from the “true” edge) would be opaque. Any stretched pixel on the border should show through to the 2-pixel seam behind it, and be interpolated correctly. (I think)…

Comments?

-Loren

[…]

I think that implementing a pure software renderer
through SDL, meeting your requirements, that runs at a reasonable speed
would be challenging.

It would take some work to come up with an optimized inner loop, but unfortunately, it seems like it would almost be a waste of time, unless it has to run on “vintage” CPUs. Plain, flat scaling is not a major performance issue. Blitting to the screen is. (On all targets, I’d say, but while some are “OK”, others are totally hopeless for anything more than some 300 kB per frame.)

Try filling a screen of the desired resolution and pixel format with random colors using SDL_FillRect(), or a full screen blit from an SDL_DisplayFormat()ed surface. Make sure you flip each frame using SDL_Flip(). Try on Linux as well as Win32, as there’s generally a big difference!
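For reference, a quick sketch of such a fill-rate test (SDL 1.2 calls; the resolution, depth and 5-second duration are just example values):

#include <stdio.h>
#include <stdlib.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;
    Uint32 start;
    int frames = 0;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;
    screen = SDL_SetVideoMode(640, 480, 16, SDL_SWSURFACE);
    if (!screen)
    {
        SDL_Quit();
        return 2;
    }

    start = SDL_GetTicks();
    while (SDL_GetTicks() - start < 5000)
    {
        /* Fill the whole screen with a random color... */
        SDL_FillRect(screen, NULL,
                     SDL_MapRGB(screen->format, rand() & 0xff,
                                rand() & 0xff, rand() & 0xff));
        /* ...and make sure it actually reaches the display. */
        SDL_Flip(screen);
        ++frames;
    }
    printf("%d frames in 5 s (%.1f fps)\n", frames, frames / 5.0);

    SDL_Quit();
    return 0;
}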

If that isn’t fast enough for you, OpenGL is your only option.

//David

On Mon, 17/06/2002 19:08:00, Ray Skoog wrote:

Well, that sounds a bit like what I suggested in the first place - and it doesn’t require alpha blending, or any extensions at all. Just add an outline of pixels from surrounding tiles, adjust the texcoords (and the tile size - you’ll have to use some_power_of_two - 2), and everything will look fine.

BTW, my smoothscroll demo at

http://olofson.net/mixed.html

relies on the same “effect”, although implicitly by using tiles from a single texture. Tile placement on the texture ensures proper “border pixels” for each tile, as long as tiles are combined in “legal” ways.

//David
