SDL 1.3 and rendering API state management

[…]

David, I do this kind of stuff all the time. The SDL API must push
all attributes, set what it needs, do the job, and restore the
attributes and return.

The “opengl” renderer doesn’t in the version I have, at least. It sets
the state up once, and assumes that no one is messing with it after
that.

This way the context stays the same from the
end user’s point of view.

This has clearly never been the intention. Just like glSDL, the SDL
renderers are not intended to be used together with other code using
the same contexts.

Someone is going to say that pushing and popping the context is too
time consuming; I don’t believe it.

Well, nothing is free, but all other alternatives seem more or less
nonsensical, or plain do not work, so…

The current initialization takes 9 calls or so. No big deal for
"normal" operation, but push, init, render, pop for every single
SDL_Render*() call…? (For comparison: each SDL_RenderCopy() call
generates around 15 OpenGL calls as it is.)
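
(For illustration only - this is not SDL source, just a rough sketch of
what "push, init, render, pop" would mean in OpenGL terms. The wrapper
function and its argument are made up; only the gl* calls are real.)

#include "SDL_opengl.h"

/* Hypothetical sketch -- not SDL code. Save the caller's GL state,
   set up the 2D state the renderer needs, draw, then restore. */
static void guarded_render_copy(GLuint texture)
{
    /* Save server- and client-side state the renderer might touch.
       (Matrix state would need glPushMatrix()/glPopMatrix() as well.) */
    glPushAttrib(GL_ENABLE_BIT | GL_TEXTURE_BIT | GL_COLOR_BUFFER_BIT);
    glPushClientAttrib(GL_CLIENT_ALL_ATTRIB_BITS);

    /* Re-apply the 2D setup the renderer normally does once at init. */
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);

    /* ... emit the textured quad here ... */

    /* Hand the context back exactly as the application left it. */
    glPopClientAttrib();
    glPopAttrib();
}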

The alternative is to put in a wedge layer to capture OpenGL context
changes (a shadow stack) so that the context can be saved and
restored outside of OpenGL. And I think that is an even worse idea
than having a notification API. :slight_smile:

Yeah, seems like yet another way to “avoid” overhead by introducing
extra logic that costs more than it can ever save. :smiley:

It is better to drop the whole idea than to put in the kind of
notification API you were talking about. Us mere mortals don’t want
to deal with it and will get it wrong all the time. Consider how few
people actually check return codes from C standard library
calls. :slight_smile:

Good point. (It only affects people that hack SDL or use OpenGL and/or
Direct3D directly, and want to use SDL 2D libs over it, but that’s
bad enough.)

So, I guess it’s either hardwiring state saving and restoring into the
SDL renderers, or possibly (if it actually matters) making it
optional by means of an “I want to share the OpenGL/Direct3D context
with the renderer” flag?

If anything this is an argument for having a higher level SDL3D API
that can take care of all of this stuff correctly no matter what the
lower level API is. And, I know how popular that idea is! (Not at
all.)

And, this layer would still have to actually implement a solution
(explicit support for multiple “clients” per context?), which means
in the end, we’d probably have about the same amount of overhead,
only it’s generated by more complex code.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 04:13, Bob Pendleton wrote:

I should get more sleep and not brainstorm so wildly. :smiley:

Some “problems” are just too simple for solutions like that.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 09:30, Ryan C. Gordon wrote:

Yeah… Rename SDL_LoadBMP() as SDL_LoadSurface() (or something),
and convert SDL_image into a plugin that adds support for more
file formats?

And this is where I go “stop, stop, god, please stop!”

I don’t see why this is a bad idea. APIs like GStreamer and gdk-pixbuf
provide flexible plugin interfaces so the core library doesn’t need to
change when someone wants to add some random format. Why must SDL be
monolithic and inflexible? I’d think it’d make package management a
whole lot easier too. It can always be done in a way so that they can be
external modules or built into the library. I’d think it’d need that
anyway.

I don’t see why this is a bad idea. APIs like GStreamer and
gdk-pixbuf provide flexible plugin interfaces so the core library
doesn’t need to change when someone wants to add some random format.
Why must SDL be monolithic and inflexible?

Well, unfortunately, plugin APIs are about as hard to design as they
are powerful. I’ve been somewhat involved in the design of a few
plugin APIs (audio), and in short, my experience is that designing
one that actually works is a h*ll of a lot harder than you’d think
the first time you look at some plugin SDK.

Just for starters, you need some type of subsystem to manage plugins.
(Register, query, instantiate, destroy.) Though this doesn’t have to
be rocket science, it’s still a bunch of functions and data
structures that simply don’t exist in the “SDL style” layered
design.

I’d think it’d make package management a whole lot easier too. It
can always be done in a way so that they can be external modules or
built into the library. I’d think it’d need that anyway.

I’m not sure exactly what it would simplify, but I have done enough of
it to see what it complicates. :wink:

That said, “need” is a relative thing. Are image loading plugins
important enough to warrant designing and implementing the API and
host side in SDL?

As of now, we have the BMP loader in the SDL core, and then we have
SDL_image, and… well, that’s it, I think. Basically, if you need to
load images, you pull in SDL_image, or you hack your own custom
loader. Or, if you just need a few images, SDL_LoadBMP() might be
sufficient.

Of course, a plugin system with a single image loading API over it
does sound nice from the aesthetic POV - but frankly, does it really
add anything useful?

If you really want a plugin based image loading subsystem for your
applications, you can always implement that as an add-on library,
that comes with a bunch of plugins that wrap SDL_LoadBMP() and
SDL_image. You could even implement it as a snap-in replacement for
SDL_image.

Oh, BTW, unless you’re linking statically, the SDL_image lib already
is a plugin of sorts! If you want to “force” some alien image
format upon a closed source application (which is one of very few
valid reasons to have a plugin system at all!) you can just throw in
a hacked version of SDL_image that supports this image format.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 17:25, Antonio SJ Musumeci wrote:

From my experience with plugin APIs… it’s the abstraction which is
hardest. Making sure you accommodate everything and provide the
right hooks. SDL already does this. It just isn’t dynamic. "Simple"
should mean simple to use and maintain… not necessarily simple to
create. Once the core is done… all the components are completely
independent of that core. No creeping interdependencies. No need to
worry about the whole of the library to modify one section. No waiting
for bugfixes in some relatively minor section of the code because it’s
not enough to warrant a full release. For packagers, you no longer need
to have several versions of SDL depending on what backends it supports.
If I don’t want ARTS or ESD I don’t need them. If I don’t want XPM or
TIFF I don’t need to have them. If the API for a library changes you can
just release a plugin for that updated version.

Given SDL’s purpose and use… it seems odd that it wouldn’t be designed
to be as flexible, consistent, open, and easily expandable as possible.


Apologies for not editing this post down, but I wasn’t sure what to edit
out :slight_smile: My suggestion is that the advanced SDL 2D/3D (whatever) should
just cause an error termination if they are called on an OpenGL
surface. There is never any reason to use them on an OpenGL surface.
OTOH, no reason they can’t work on software surfaces. But, it is clear
that they will mess up the rendering context on an OpenGL surface. The
alternative is to post a context lost event (which is something Windows
users have wanted for a long time) and do whatever they do. But, that
is asking for problems. Another alternative behavior is to error off
unless the programmer has asked for context lost events. Then you know
the programmer is ready to handle the context problems caused by using
the SDL APIs when they really should not use them.

Bob Pendleton

On Mon, 2006-09-04 at 09:58 +0200, David Olofson wrote:


Looking at SDL_stdinc.h in SDL 1.3… I’m not seeing anything like
stdint.h’s int_fast*_t. I think it’d be nice to provide something like
that. I understand you could use “int”, but it’d be nice to have data
types consistent within SDL for when you just want an integer but don’t
care about the size, because you know it’s within a reasonable range and
are more worried about speed.
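
Just to illustrate the naming idea - nothing like this is in
SDL_stdinc.h today, and the names below are made up; they simply mirror
the <stdint.h> "fast" types with SDL-style spelling:

#include <stdint.h>

/* Hypothetical SDL-style aliases for the C99 "fast" integer types. */
typedef int_fast16_t  Sint_fast16;
typedef uint_fast16_t Uint_fast16;
typedef int_fast32_t  Sint_fast32;
typedef uint_fast32_t Uint_fast32;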


I do not understand the hang up over plugins. There is no need for SDL
to ever support a plugin for any purpose. For image formats all you have
to do is write the simple code to decode an image and set up an SDL
surface structure to point at it in memory. As soon as you do that the
new surface can be used by any part of SDL that understands surfaces.
And, it can be used by any other API that understands SDL’s surface
structure.
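
A minimal sketch of what I mean (decode_my_format() is a made-up
decoder; SDL_CreateRGBSurfaceFrom() is the real SDL call that wraps
existing pixel memory in a surface):

#include "SDL.h"

/* Hypothetical decoder: returns tightly packed 32-bit RGBA pixels. */
extern void *decode_my_format(const char *path, int *w, int *h);

SDL_Surface *load_my_format(const char *path)
{
    int w, h;
    void *pixels = decode_my_format(path, &w, &h);
    if (pixels == NULL) {
        return NULL;
    }
    /* The surface just points at the decoded pixels; SDL does not copy
       or free them, so they must outlive the surface. */
    return SDL_CreateRGBSurfaceFrom(pixels, w, h, 32, w * 4,
                                    0x000000FF, 0x0000FF00,
                                    0x00FF0000, 0xFF000000);
}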

You can add all sorts of rendering APIs by simply layering them on top
of the SDL surface structure and blit operations, or you can go down to
the pixel level and do whatever you want to any SDL surface.

Frankly, I think that people are so used to having to jump through hoops
of real fire to add things to monolithic applications that they cannot
imagine how easy it is to layer functionality on top of the existing SDL
API.

The same goes for adding higher level functionality to OpenGL surfaces.
You just have to layer the functionality on, using OpenGL APIs instead
of using SDL APIs.

There is no need for plugins because SDL is so open that you can just
layer whatever you want on top of SDL.

In the case where you want to support a new low level device, well, SDL
has a way to do that too. It is called a driver. Write one for whatever
you want and all SDL functionality becomes available for that device.

Bob Pendleton

On Mon, 2006-09-04 at 13:59 -0400, Antonio SJ Musumeci wrote:


Bob Pendleton wrote:


You can add all sorts of rendering APIs by simply layering them on top
of the SDL surface structure and blit operations, or you can go down to
the pixel level and do whatever you want to any SDL surface.

No, as soon as you want to hardware accelerate the new functionality,
you have to have at least some hook to the internals.
Say, for example the hardware exposes a rotation function that goes
unused by the current SDL code. If you don’t want rotation support in
SDL itself, but in an external library, you have to do it in software.
That’s because SDL doesn’t expose the hardware’s rotation function.

On the other hand, if you have access to internal hooks through some
plugin interface, you’re now able to provide hardware acceleration for
the new functionality. And really, you don’t want things like
rotation/scaling without acceleration, because they’d be pretty useless
then.

Stephane

My thoughts exactly. Xorg, GStreamer, etc. don’t provide plugin
interfaces for the developers’ health. SDL provides a cross-platform
video/audio output (and possibly input) API. Hiding away valuable
hardware capabilities is a complete waste of resources. Who cares if you
use an accelerated backend if all you can do is standard blits? Alpha
blending, rotation, scaling (or simple geometry drawing)… those are
all basic things done in just about every game in the last 10 years. Why
must you jump through hoops or reinvent the wheel to use them in SDL?

Stephane Marchesin wrote:


[…]

You can add all sorts of rendering APIs by simply layering them on
top of the SDL surface structure and blit operations, or you can
go down to the pixel level and do whatever you want to any SDL
surface.

No, as soon as you want to hardware accelerate the new
functionality, you have to have at least some hook to the internals.
Say, for example the hardware exposes a rotation function that goes
unused by the current SDL code. If you don’t want rotation support
in SDL itself, but in an external library, you have to do it in
software. That’s because SDL doesn’t expose the hardware’s rotation
function.

Right. This was the first problem I ran into: I can’t use SDL
textures, as I can’t get at the OpenGL texture name or Direct3D
texture pointer.

On the other hand, if you have access to internal hooks through some
plugin interface, you’re now able to provide hardware acceleration
for the new functionality. And really, you don’t want things like
rotation/scaling without acceleration, because they’d be pretty
useless then.

Alternatively, if it’s not realistically possible or too inefficient
to share contexts, there is the option of plugging in a full renderer
implementation using the current (internal) renderer registry API.

As “advanced” rendering features are apparently never going into SDL,
I think that is the safest and most efficient way of handling it.
We’re effectively talking about maintaining a set of extended
versions of the SDL renderers, as a separate project - and I think
that’s easier and more likely to actually work than any other
solution that’s been discussed so far.

To me, it seems like an easy way out for everyone. The interfaces are
already in SDL 1.3; they’re just not public. There are renderers as
well, so creating a new renderer is just a matter of ripping one or
more of the ones in SDL and extending as needed.

So, why bother plugging things in behind the SDL API, rather than
adding layers above SDL the way we are used to?

Well, adding layers doesn’t automatically make add-on libs using the
SDL 2D API work over your display setup. Doesn’t matter if you
implement the full SDL 2D API, as there is no way you can have an
add-on library use it, unless you hack and recompile the add-on
library, or use some compile time magic a la glSDL/wrapper.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 21:07, Stephane Marchesin wrote:

[…]

Apologies for not editing this post down, but I wasn’t sure what to
edit out :slight_smile: My suggestion is that the advanced SDL 2D/3D
(whatever) should just cause an error termination if they are called on
an OpenGL surface. There is never any reason to use them on an
OpenGL surface.

Well, as long as it’s strictly forbidden to use them in GUI toolkits
and other add-on libs. And “basic” SDL 2D is out too, so basically,
if you use OpenGL (or Direct3D), you’re on your own - no SDL add-on
libs will be able to render over your display, unless they happen to
natively support your 3D API(s) of choice.

I believe it’s easier and safer to wire existing libs to a custom
implementation of the SDL 2D, Advanced2D and/or whatever API, than to
port the libs. Also, if you decide to switch to alternative lib or
newer version or something down the road, your porting effort becomes
more or less wasted time, whereas if you had your own SDL *2D
implementation, you could just switch lib and get on with the real
work.

OTOH, no reason they can’t work on software surfaces. But, it is
clear that they will mess up the rendering context on an OpenGL
surface.

They won’t do anything beyond your control if their backends call into
your own rendering code. :slight_smile:

The alternative is to post a context lost event (which is
something Windows users have wanted for a long time) and do whatever
they do. But, that is asking for problems. Another alternative
behavior is to error off unless the programmer has asked for context
lost events. Then you know the programmer is ready to handle the
context problems caused by using the SDL APIs when they really
should not use them.

Yeah, I like the general idea of somehow making it hard for developers
to not realize there is a problem to deal with.

The context lost event is effectively the notification system I had in
mind, only much simpler and probably quite sufficient to deal with
the problem.

Not sure what happens if you have more than two clients - the (SDL
core or other) renderer and the application - but OTOH, you should
never need that if things like Advanced2D can implement both the SDL
2D API and its own extensions in the same renderer. (Well, maybe if
we end up with more than one rendering extension lib and people want
to mix libs that don’t use the same rendering extension lib, but the
idea with Advanced2D is that it should cover pretty much anything
that isn’t best done with the normal SDL 2D API or directly over a 3D
API.) :slight_smile:

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 20:03, Bob Pendleton wrote:


No, as soon as you want to hardware accelerate the new functionality,
you have to have at least some hook to the internals.

I would say that at that point you have to modify the driver level of
SDL. That is what it is for. Having now made that improvement in SDL
everyone can use it.

Say, for example the hardware exposes a rotation function that goes
unused by the current SDL code. If you don’t want rotation support in
SDL itself, but in an external library, you have to do it in software.
That’s because SDL doesn’t expose the hardware’s rotation function.

On the other hand, if you have access to internal hooks through some
plugin interface, you’re now able to provide hardware acceleration for
the new functionality. And really, you don’t want things like
rotation/scaling without acceleration, because they’d be pretty useless
then.

Stephane

The point is that if there is generic functionality that is missing from
SDL then it should be added to SDL. If you want to add driver level
functions, then do so; it is not that hard to do. There is a clear
hardware abstraction layer already in SDL. And I guess that is my point.
This stuff is already there.

SDL 1.2 supports a huge number of devices. I hope that 1.3 will
eventually support just as large a number of devices. I think one of the
reasons that SDL has been able to support so many devices is because so
much device level information is completely hidden inside the device
driver layer of SDL. Just thinking about the API that would be needed to
allow customized device initialization is enough to make me go running
screaming into the night.

Bob Pendleton

On Mon, 2006-09-04 at 21:07 +0200, Stephane Marchesin wrote:


[…]


Right. This was the first problem I ran into: I can’t use SDL
textures, as I can’t get at the OpenGL texture name or Direct3D
texture pointer.

Right there you lost me. SDL doesn’t have textures. SDL has surfaces.
OpenGL has textures, and you get the texture name when you upload the
surface in OpenGL. I do not understand the situation in which SDL would
hide a texture name from you since SDL does not have textures.

Please explain. I clearly do not understand what you are saying here.

Bob Pendleton

On Mon, 2006-09-04 at 22:20 +0200, David Olofson wrote:


[…]

Apologies for not editing this post down, but I wasn’t sure what to
edit out :slight_smile: My suggestion is that the advanced SDL 2D/3D
(whatever) should just cause an error termination if they are called on
an OpenGL surface. There is never any reason to use them on an
OpenGL surface.

Well, as long as it’s strictly forbidden to use them in GUI toolkits
and other add-on libs. And “basic” SDL 2D is out too, so basically,
if you use OpenGL (or Direct3D), you’re on your own - no SDL add-on
libs will be able to render over your display, unless they happen to
natively support your 3D API(s) of choice.

OK, here I see we have two very different ways of looking at things.
Remember that I have spent several full time months each of the last 2
years working on adding multiple windows to SDL. I got it working the
way I want it on X11. Right now I am just sitting here biting my tongue
hoping SDL 1.3 will not undo all the work I have done.

The right way to do GUI toolkits is to have multiple windows and
windows within windows. The way I implemented it, some windows can be
normal SDL 2D windows and some can be 3D OpenGL windows with each OpenGL
window having its own context. This way you can have any mixture of 2D
and 3D on the screen that you want. You can have 3D pop ups over 2D and
2D pop ups over 3D. You can have a 3D window as part of your screen with
a nice tree widget in a 2D window next to the 3D window.

I think that solves the problem you are talking about here. Also, it
makes it possible to build powerful GUI toolkits without putting one
into SDL. And, better yet, I have permission to add this to SDL after
1.3 is released.

Bob Pendleton

On Mon, 2006-09-04 at 22:58 +0200, David Olofson wrote:


My thoughts exactly. Xorg, GStreamer, etc. don’t provide plugin
interfaces for the developers’ health. SDL provides a cross-platform
video/audio output (and possibly input) API. Hiding away valuable
hardware capabilities is a complete waste of resources. Who cares if
you use an accelerated backend if all you can do is standard blits?

Anyone who needs some serious frame rates in higher resolutions? :wink:

Alpha blending, rotation, scaling (or simple geometry drawing)…
those are all basic things done in just about every game in the last
10 years. Why must you jump through hoops or reinvent the wheel to
use them in SDL?

Well, that’s exactly my point.

I’m implementing a rendering API suitable for 2D by modern standards
that will render over OpenGL, Direct3D and a software rasterizer. I’m
doing this because there is no way, ever, that I’m going to have
Direct3D code in my actual application code if I can help it (it’s a
hack for a single platform that I hardly use), and because I want to
keep commonly used low level rendering code in one place, accessible
through the same API.

Nice and handy and generally useful for all sorts of stuff in the now
so popular gray zone between 2D and 3D? Well, if it won’t be, I won’t
have much use for it either…! :smiley:

Now, I can just hack my in-house rendering lib directly over OpenGL
and Direct3D, and be done with it. This is the standard solution for
SDL 1.2 applications that do 2D over OpenGL. (A few use
glSDL/wrapper. Most don’t support Direct3D.) My rendering lib would
not be able to run over the SDL 1.3/2.0 “opengl” or “d3d” renderers
(so SDL 2D calls won’t work), nor would it be able to use SDL managed
textures. When used, it becomes the only way to access the display,
maybe short of the 3D API it’s currently using for rendering.

Alternatively, I can try to make this lib integrate nicely with SDL,
so people can use it as a rendering API extension for SDL, without
essentially replacing/breaking the standard SDL video subsystem. Same
features, same performance, same everything - except code that uses
the standard SDL rendering API still works.

To me, the difference is minimal, as I’ll have to implement everything
that the SDL 2D renderers do anyway, and then some, so implementing
SDL_RenderFill(), SDL_RenderCopy() etc would be trivial. I might even
save some work by having SDL set up the display and manage the
textures.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 21:29, Antonio SJ Musumeci wrote:

[…]

Right. This was the first problem I ran into: I can’t use SDL
textures, as I can’t get at the OpenGL texture name or Direct3D
texture pointer.

Right there you lost me. SDL doesn’t have textures. SDL has
surfaces.

Sorry, I was talking about SDL 1.3, which is a bit different from the
SDL 1.2 API. Among other things, source surfaces for display
rendering calls have now been replaced with textures, which are
basically a bit more abstract than surfaces.

In short, SDL 1.3 is designed to work well with modern accelerated
APIs like OpenGL and Direct3D, but if need be, it still supports the
"retro style" tricks you could do with SDL 1.2.

On Monday 04 September 2006 23:29, Bob Pendleton wrote:

OpenGL has textures, and you get the texture name when you upload
the surface in OpenGL. I do not understand the situation in which
SDL would hide a texture name from you since SDL does not have
textures.

From testsprite2.c of SDL 1.3:


int
LoadSprite(char *file)
{
    int i;
    SDL_Surface *temp;

    /* Load the sprite image */
    temp = SDL_LoadBMP(file);
    if (temp == NULL) {
        fprintf(stderr, "Couldn't load %s: %s", file, SDL_GetError());
        return (-1);
    }
    sprite_w = temp->w;
    sprite_h = temp->h;

    /* Set transparent pixel as the pixel at (0,0) */
    if (temp->format->palette) {
        SDL_SetColorKey(temp, SDL_SRCCOLORKEY, *(Uint8 *) temp->pixels);
    }

    /* Create textures from the image */
    for (i = 0; i < state->num_windows; ++i) {
        SDL_SelectRenderer(state->windows[i]);
        sprites[i] =
            SDL_CreateTextureFromSurface(0, SDL_TEXTUREACCESS_REMOTE, temp);
        if (!sprites[i]) {
            fprintf(stderr, "Couldn't create texture: %s\n", SDL_GetError());
            SDL_FreeSurface(temp);
            return (-1);
        }
    }
    SDL_FreeSurface(temp);

    /* We're ready to roll. :) */
    return (0);
}

The problem is that the SDL_TextureID returned from
SDL_CreateTextureFromSurface() is just an internal identifier, and
there is no API to get an OpenGL texture name or a Direct3D texture
interface object pointer from that.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --’

[…]

Apologies for not editing this post down, but I wasn’t sure what
to edit out :slight_smile: My suggestion is that the advanced SDL 2D/3D
(whatever) should just cause an error termination if they are
called on an OpenGL surface. There is never any reason to use
them on an OpenGL surface.

Well, as long as it’s strictly forbidden to use them in GUI
toolkits and other add-on libs. And “basic” SDL 2D is out too, so
basically, if you use OpenGL (or Direct3D), you’re on your own -
no SDL add-on libs will be able to render over your display,
unless they happen to natively support your 3D API(s) of choice.

OK, here I see we have two very different ways of looking at things.
Remember that I have spent several full time months each of the last
2 years working on adding multiple windows to SDL. I got it working
the way I want it on X11. Right now I am just sitting here biting my
tongue hoping SDL 1.3 will not undo all the work I have done.

I’m not sure how different that part of SDL 1.3 really is from SDL
1.2. All I know for sure is that 1.3 does support multiple windows.

The right way to do GUI toolkits is to have multiple windows and
windows within windows.

Well, it’s the traditional way, but I’m not sure it’s the right way
for typical fullscreen games and similar multimedia applications.

Indeed, if windows can be shaped, have alpha channels etc, the visible
result can be the same. However, if there are only rectangular opaque
windows, it only really works for very basic GUI stuff. No fancy
non-rectangular widgets, translucent selectors and stuff - but that
seems to be what everyone expects these days.

The way I implemented it, some windows can be normal SDL 2D windows
and some can be 3D OpenGL windows with each OpenGL window having its
own context. This way you can have any mixture of 2D and 3D on the
screen that you want. You can have 3D pop ups over 2D and 2D pop ups
over 3D. You can have a 3D window as part of your screen with a nice
tree widget in a 2D window next to the 3D window.

For someone like me, who tends to (ab)use SDL for anything graphics
related, this is exactly what’s needed to finally eliminate the need
for any other graphics solutions whatsoever. :slight_smile:

I think that solves the problem you are talking about here.

It does handle some cases, but considering the current state of widely
available windowing systems, I don’t see how it can reliably support
the kind of “tight” visual integration expected from a modern game
GUI. AFAIK, Mac OS X is the only widely available platform that can
actually do live compositing of windows while serious stuff is going
on. Try making a “live” window translucent on anything else, and see
smooth animation turn slideshow…

Also, it makes it possible to build powerful GUI toolkits without
putting one into SDL. And, better yet, I have permission to add this
to SDL after 1.3 is released.

I really hope that works out properly.

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Monday 04 September 2006 23:41, Bob Pendleton wrote:


[…]

Right there you lost me. SDL doesn’t have textures. SDL has
surfaces.

Sorry, I was talking about SDL 1.3, which is a bit different from the
SDL 1.2 API. Among other things, source surfaces for display
rendering calls have now been replaced with textures, which are
basically a bit more abstract than surfaces.

Oh, Duh…


The problem is that the SDL_TextureID returned from
SDL_CreateTextureFromSurface() is just an internal identifier, and
there is no API to get an OpenGL texture name or a Direct3D texture
interface object pointer from that.

That sounds like a minor API bug. You just need an API to let you get at
the texture id that SDL has hidden away somewhere else. Just like the
window id hack that is used to get Windows window ids for people who
want to tweak them directly.

That is easy enough to add and does not add complexity to the overall
design.

Bob Pendleton

On Tue, 2006-09-05 at 00:32 +0200, David Olofson wrote:


[…]

The problem is that the SDL_TextureID returned from
SDL_CreateTextureFromSurface() is just an internal identifier, and
there is no API to get an OpenGL texture name or a Direct3D
texture interface object pointer from that.

That sounds like a minor API bug. You just need an API to let you
get at the texture id that SDL has hidden away somewhere else. Just
like the window id hack that is used to get Windows window ids for
people who want to tweak them directly.

That is easy enough to add and does not add complexity to the overall
design.

Yeah, that’s what I’m thinking. There is a minor issue though; OpenGL
uses an integer ID, whereas Direct3D uses a pointer to a texture
interface object. I suppose one could typedef something that will
work for both, on a platform by platform basis.
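
Something along these lines, purely hypothetical - nothing like this
exists in SDL 1.3, the names are invented for illustration, and picking
the type per platform is a simplification, since the renderer is really
chosen at run time:

#include "SDL.h"

#ifdef _WIN32
/* Direct3D renderer: the native handle would be a texture interface
   pointer (IDirect3DTexture9 *). */
typedef void *SDL_NativeTexture;
#else
/* OpenGL renderer: the native handle would be a GLuint texture name. */
typedef unsigned int SDL_NativeTexture;
#endif

/* Hypothetical accessor mapping an SDL_TextureID to the handle the
   renderer created behind it. */
extern SDL_NativeTexture SDL_GetNativeTexture(SDL_TextureID texture);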

//David Olofson - Programmer, Composer, Open Source Advocate

.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
’-- http://www.reologica.se - Rheology instrumentation --'

On Tuesday 05 September 2006 16:46, Bob Pendleton wrote: