[…]
No no, haven’t you heard of backward compatibility? You write a new
API function, document it as the proper API function to use, and
document the API functions it replaces as obsolete. The old API
would still be supported, but new applications shouldn’t be written to
use the old API. They should use the new API.
Of course. You just have to motivate developers to learn and switch to
the new API - which seems to be hard sometimes…
Perhaps. My experience is that a lot of programmers always seem to be
trying to learn new languages and skills. The primary motivation here is
probably getting better jobs/more money. Unix already seems much more
complicated than Windows, because there’s so much more available. On
Windows, people might know C/C++ and Java. A Unix programmer, however,
will probably know those plus shell scripting and Perl as well. Not
to mention that Emacs and vi have steep learning curves too. It seems
like they shouldn’t mind a little more learning if they got as far as they
have already. Maybe everyone eventually reaches a point of burnout or
something, though.
Anyone new who doesn’t know the API at all won’t have any problem learning
the new API rather than the old one, and that would help a lot as well. I
still haven’t learned how to do any X programming, so it would all be new to
me. I would prefer a new API, since a new API would have some benefits over
the old one. Otherwise, why have a new API? That’s something else for the
old API’s users to consider too. Yes, you have to learn a new API, but there
are benefits to doing so as well, so it’s a good thing really.
I think another part of it is the attitude that things must be “supported
by all” or by none of them. Why can’t you take advantage of features that
exist on some systems but not others? Instead, some people seem to like
forcing everyone down to the lowest common denominator.
Problem is that few features can be supported without forcing
applications that use them to take special measures if the features are
not available.
True, but that’s still better than not having it available at all. You can
just leave the choice up to the programmer as to whether they wish to use it
or not. No one is forcing them to, you know. If they decide not to, it’s
the same as if it wasn’t supported at all for them anyway, so where’s the
loss there? I can only see gains. The only cost really is in implementing
the features in the library/API.
As an example, consider adding scaling and rotation to SDL’s blitting.
Sure, glSDL could accelerate it very easily - but it would be the
only target to do so! Consequently, applications would just have to
stay away from those features unless they’re accelerated. Of course,
there would also have to be a reliable way of finding out whether or not
they’re accelerated.
I’m not sure that’s a good example of something to support in SDL itself. I
think it’s better off in a separate, more specialized library, and that’s
exactly where it is (glSDL). I’m thinking more of applications that aren’t
trying to be cross-platform on every possible system in existence (even
though that seems to be rare these days). Maybe I’m only interested in
Windows and Linux on x86 architecture. Given this limited scope, you can
rely on a good many features always being there. 3D-accelerated video is
another good example. It didn’t used to be as widespread as it is now, and
even today there are plenty of people without 3D cards, yet people still
want to make their applications require it. Should we have disallowed
accelerated 3D support just because not everyone has that ability?
Even SDL has a little
of this. SDL_GetTicks() could use higher resolution timers if they
exist rather than just holding to 10ms resolution.
Sure, this could be useful for benchmarking, and possibly a few other
things. However, you’d also need an extension to find out what the actual
resolution is, and applications would have to make use of the data in
some way.
Ok. Why’s that a problem?
I think this issue
is the biggest reason why Linux isn’t as good a gaming platform as
Windows is.
Despite my defending the community, I think this is true to some extent.
Still, in its defense, the main reason why this is so is that
supporting things like direct framebuffer access, triple buffering and so
on requires deep, ugly hacks in the current driver architectures - or a
major redesign. Companies that want to sell new, cool hardware don’t
hesitate to do these kinds of things, but Free/Open Source developers will
often not even consider it.
And that’s the problem right there, though. The more it’s avoided, the
worse the situation gets, making it even less likely a redesign will occur
to make things clean and simple. It also explains why Linux has such a
steep learning curve for brand-new users. It often seems like things only
get changed around when it absolutely can’t be avoided any longer. People
don’t realize the price to be paid for that, I suppose - such as Windows
being better for games and easier for beginners to learn and use (which
is why it continues to be more popular than Linux with the majority), etc.
Anyway, Linux is improving slowly, and that’s good. I still don’t use it
very often, even though I would like to, because it’s still got a little
way to go yet. But I’m still helping in my own way, by writing my games to
run on both Windows and Linux. Things can only get better with time.
However, I’m quite sure the biggest reason by far is that Linux is a
rather small market for games.
Because the foundation isn’t there. It’s starting to be now, thanks to SDL,
DGA, etc. The foundation is where it all has to start. Unfortunately, X is
a big part of the foundation, and it’s still lacking in a lot of ways.
Improving it isn’t going quickly, for whatever reasons. Could be design
flaws, could be complexity, could be lots of things. I don’t really know,
but I do know that improving X is a big key to making the Linux gaming
market bigger.
While you can release a “bad” game for Windows and have good marketing
save your ass, there’s no way you can pull that off with a Linux game. If
it’s a great game, that “every Linux gamer will have to buy”, then
maybe you can make some money.
You could have sucky games on Linux as well, once things move ahead far
enough. If we can get to the point where Linux gets as many commercial game
releases as Windows does (maybe platform-specific, maybe dual release), I’m
sure you’ll see the same things happening on Linux as on Windows as far as
games go, including plenty of sucky Linux games that good marketing manages
to save. Linux isn’t very close to that point yet, though.
Yeah… I don’t know what the XFree86 guys were thinking, but at
least, I try to keep this problem in mind when I’m designing stuff -
even though it does increase the risk of a project dying before the
beta stage.
Oh? Why’s that? Do you find it makes development take longer or
something?
Well, if you try to keep “everything” in mind in the design stage, it’s
easy to end up with something that’s just too messy and complicated to
implement properly.
I’m not sure I’m convinced of that. I think there’s an art to it. For
example, comparing Direct3D and OpenGL, people will nearly always say that
OpenGL is much easier to use. They both let you accomplish basically the
same thing, though. One API is just superior, is all. I’ve tried using
DirectX in the past, but never could seem to get the damn palette to set
properly in full screen mode. SDL works easily and beautifully for me,
though. I also think it’s much simpler to use. Once again, a better API.
Design is the easy part, perhaps, while making the design simple and
elegant is the hard part. It’s just another skill to improve, I think, like
programming skill, debugging skill, documenting skill, optimizing skill,
etc.
-Jason
----- Original Message -----
From: david.olofson@reologica.se (David Olofson)
To:
Sent: Wednesday, March 13, 2002 2:09 PM
Subject: Re: [SDL] Mouse wheel in SDL??
On Wednesday 13 March 2002 00:50, Jason Hoffoss wrote: