SDL 1.2 and OS X 10.6

Kenneth Bull wrote:

2009/9/22 Bill Kendrick :

(This isn’t directed at you personally. It’s entirely Apple’s fault that
backwards compatibility is such a pain in the ass on Macs.)
Hear hear.

Backwards compatibility is a pain in the ass everywhere. Imagine how
much simpler newer processors would be if they didn’t need to be
backwards compatible with the 8086. There is something to be said for
throwing it all out and starting from scratch with the knowledge we’ve
gained.

Whatever happened to all those processor architectures that were
supposed to replace x86? Oh, that’s right, they were replaced by x86
instead.

Compatibility (whether forward, backwards, or sidewards) is the single
most important feature of any computing platform. Otherwise you will
force everyone who uses that platform to continually reinvent the wheel.

If the old architecture is really completely broken, here’s what you do:

  • Design the new architecture for total forward compatibility from the
    start. Don’t repeat the mistakes that forced you to drop the old
    architecture.
  • Stick with your new architecture.
  • Create a compatibility layer that works, and maintain it forever.

Apple hasn’t done any of those things.

Maybe so, but in the end, those are unrealistic expectations. You
can’t maintain compatibility with everything forever, as the cost of
maintaining it eventually becomes too great versus the benefits.

Not that I want to specifically defend Apple, as I do agree that they
sometimes drop compatibility too early (OpenCL not supporting most
last gen GPUs comes to mind), but in terms of software engineering in
general you do have to keep in mind that while backward compatibility
is very valuable (I completely agree with you on that), it comes at a
high cost. A company like Apple needs to put limits somewhere if it
wants to move forward.

(Though one has to admit that Apple has done some pretty good work at
maintaining compatibility in some regards. Transitioning your whole
platform from one CPU architecture to another is tricky and Apple
handled it pretty well both times. And their transition to 64-bit is the
cleanest and least painful of all major OSes so far.)

On Wed, Sep 23, 2009 at 22:31, Rainer Deyke wrote:

On Tue, Sep 22, 2009 at 04:34:55PM -0600, Rainer Deyke wrote:

Simon Roby wrote:

If the old architecture is really completely broken, here’s what you do:

  • Design the new architecture for total forward compatibility from the
    start. Don’t repeat the mistakes that forced you to drop the old
    architecture.
  • Stick with your new architecture.
  • Create a compatibility layer that works, and maintain it forever.

Apple hasn’t done any of those things.

Maybe so, but in the end, those are unrealistic expectations. You
can’t maintain compatibility with everything forever, as the cost of
maintaining it eventually becomes too great versus the benefits.

Maintenance should be free except for bug fixes. If you emulate API A
on API B, then the compatibility layer will keep working so long as API
B stays stable. In other words, if maintaining compatibility is
problematic, then you’re not doing it right.
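
To put that in code terms: a compatibility layer can be as thin as a set
of wrapper functions. Here is a minimal sketch, with invented names (this
is not any real API), of an old API A implemented on top of a new API B:

    #include <stdio.h>

    /* Hypothetical example only: "API A" emulated on top of "API B".
     * Neither is a real library; the names are made up to show the idea. */

    /* The newer API that the platform actually implements (stubbed here). */
    static int b_fill_rect(int x, int y, int w, int h, int r, int g, int b)
    {
        printf("fill %dx%d at (%d,%d) with rgb(%d,%d,%d)\n",
               w, h, x, y, r, g, b);
        return 0;
    }

    /* The old entry point that legacy programs still call.  As long as
     * b_fill_rect() keeps its contract, this wrapper never needs to change
     * except for bug fixes. */
    int a_draw_rect(int x, int y, int w, int h, unsigned long color)
    {
        return b_fill_rect(x, y, w, h,
                           (int)((color >> 16) & 0xFF),   /* red   */
                           (int)((color >>  8) & 0xFF),   /* green */
                           (int)( color        & 0xFF));  /* blue  */
    }

    int main(void)
    {
        /* A legacy-style call, transparently served by the new backend. */
        return a_draw_rect(10, 20, 64, 48, 0xFF8000UL);
    }

If API B stays stable, that translation is the only code you ever have to
carry forward.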

Not that I want to specifically defend Apple, as I do agree that they
sometimes drop compatibility too early (OpenCL not supporting most
last gen GPUs comes to mind), but in terms of software engineering in
general you do have to keep in mind that while backward compatibility
is very valuable (I completely agree with you on that), it comes at a
high cost. A company like Apple needs to put limits somewhere if it
wants to move forward.

In the long run, compatibility can be provided by emulating the entire
computer and running the original OS. This is basically a solved
problem. It’s the newer software that’s troublesome. To pick a
non-Apple example, I can still play every game in the Quest for Glory
series except the latest, Quest for Glory V, which requires Windows 95
and does not run under DOSBox.

Even “classic” Mac OS is only 25 years old. I often use software that’s
older than that.


Rainer Deyke - rainerd at eldwood.com

2009/9/24 Rainer Deyke :

Maintenance should be free except for bug fixes. If you emulate API A
on API B, then the compatibility layer will keep working so long as API
B stays stable. In other words, if maintaining compatibility is
problematic, then you’re not doing it right.

And then you have API A on API B on API C…

In the long run, compatibility can be provided by emulating the entire
computer and running the original OS. This is basically a solved
problem. It’s the newer software that’s troublesome. To pick a
non-Apple example, I can still play every game in the Quest for Glory
series except the latest, Quest for Glory V, which requires Windows 95
and does not run under DOSBox.

This is hardly efficient, and emulation is not so easy as you seem to
think it is…
Look at Wine for example. It’s been around for years, but only
recently reached 1.0, and even now can run only relatively simple apps
without any special configuration changes.

Even “classic” Mac OS is only 25 years old. I often use software that’s
older than that.

How many versions of DOS, Windows, OS/2 and Mac OS have been produced
in those 25 years? Do you really want to emulate all of them? What
would happen if each of those OSs emulated each of those that came
before? Would you want to run a 25 year old program in 25 years worth
of emulators stacked on top of each other?

Kenneth Bull wrote:

2009/9/24 Rainer Deyke :

Maintenance should be free except for bug fixes. If you emulate API A
on API B, then the compatibility layer will keep working so long as API
B stays stable. In other words, if maintaining compatibility is
problematic, then you’re not doing it right.

And then you have API A on API B on API C…

Whoever wrote API B should have either anticipated the needs of API C or
stuck with API A.

However, are multiple layers of emulation really a problem? Assuming
ten years between APIs, anything that uses API A would have to be at
least ten years old when API C comes out. At that point, the efficiency
hit from two layers of emulation no longer matters (since the computer
that runs API C is now orders of magnitude faster than the computer for
which the program was written), and the emulation layer from A to B does
not need to be updated.

This is hardly efficient, and emulation is not so easy as you seem to
think it is…
Look at Wine for example. It’s been around for years, but only
recently reached 1.0, and even now can run only relatively simple apps
without any special configuration changes.

API emulation (which Wine attempts) is hard, especially if the API
you’re emulating is a baroque mess full of undocumented functionality to
which you don’t have the source. Hardware emulation is slow but easy.

How many versions of DOS, Windows, OS/2 and Mac OS have been produced
in those 25 years? Do you really want to emulate all of them? What
would happen if each of those OSs emulated each of those that came
before? Would you want to run a 25 year old program in 25 years worth
of emulators stacked on top of each other?

That’s why I consider emulation a last resort. In order from best to worst:

  • Get the API right from the start so that it does not need to be changed.
  • Add new user-space libraries that work on old implementations of the
    API.
  • Use an extension mechanism built into the API so that new
    applications can detect and use new functionality, but still work on old
    implementations of the API.
  • Extend the API by adding new functionality, but preserve backwards
    compatibility.
  • Write a new API, but keep supporting the old API through an
    emulation layer. Write the new API such that you will never again have
    to make such a big change.

Rainer Deyke - rainerd at eldwood.com

I once used a Linux program that emulates MS-DOS, which ran a program emulating
a Z80 microprocessor running CP/M, which in turn ran a program that emulated
an Intel 8051 microprocessor running a program written in 8051 assembly
language.

I thought it was funny as hell, and it was surprisingly fast.

Jeff

On Thursday 24 September 2009 19:23, Rainer Deyke wrote:

Kenneth Bull wrote:

2009/9/24 Rainer Deyke :

Maintenance should be free except for bug fixes. If you emulate API A
on API B, then the compatibility layer will keep working so long as API
B stays stable. In other words, if maintaining compatibility is
problematic, then you’re not doing it right.

And then you have API A on API B on API C…

Whoever wrote API B should have either anticipated the needs of API C or
stuck with API A.

However, are multiple layers of emulation really a problem?

2009/9/24 Rainer Deyke :

Whoever wrote API B should have either anticipated the needs of API C or
stuck with API A.

  • Get the API right from the start so that it does not need to be changed.

This is like asking Aristotle to anticipate string theory… It
doesn’t work that way.

However, are multiple layers of emulation really a problem? Assuming
ten years between APIs, anything that uses API A would have to be at
least ten years old when API C comes out. At that point, the efficiency
hit from two layers of emulation no longer matters (since the computer
that runs API C is now orders of magnitude faster than the computer for
which the program was written), and the emulation layer from A to B does
not need to be updated.

Multiple layers of emulation are a problem, if only because they
increase the number of requirements to run a program. For example,
three layers of emulation over hardware means that you need three
(possibly very large) emulator packages, three operating systems,
three nested virtual drives, three sets of system utilities, three
virtual memory systems, etc. and the program itself, in addition to
your usual operating system and utilities.

However, if most of the software you’re running is designed for the
system you’re running, then emulation is still a better option than
backwards compatibility, since it avoids bogging down modern apps for
the sake of older ones.

API emulation (which Wine attempts) is hard, especially if the API
you’re emulating is a baroque mess full of undocumented functionality to
which you don’t have the source. Hardware emulation is slow but easy.

You’re lucky computers are generally digital… I’d hate to have to
emulate antennae effects.

That’s why I consider emulation a last resort. In order from best to worst:

  • Add new user-space libraries that work on old implementations of the
    API.

Unless the new API offers no new features that couldn’t be implemented
in the old API, this isn’t possible.
For example, you can’t add sound over an animation API that doesn’t
already support sound.

  • Use an extension mechanism built into the API so that new
    applications can detect and use new functionality, but still work on old
    implementations of the API.

  • Extend the API by adding new functionality, but preserve backwards
    compatibility.

The extension mechanism generally adds some overhead itself, but I
agree that an extension mechanism is an important feature (which SDL
unfortunately lacks). However, such mechanisms, in order to avoid
making the overhead any higher than it has to be, usually have a
limited number of IDs available for new extensions, and these slots
are quickly used up by third-party developers. Even if you reserve a
certain number of slots for yourself, you eventually run out of space
and need another extension mechanism, built as an extension itself,
with its own overhead…

Eventually your API just gets too cumbersome to be practical.
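
To make the shape of such a mechanism concrete, here is a rough sketch
(the names are invented; this is not SDL or any other real API): the
application probes for a numbered extension at runtime, old
implementations simply report it as absent, and the numeric ID space is
exactly the finite resource described above.

    #include <stdio.h>

    /* Invented example of a runtime extension query; not a real API.
     * The ID space is deliberately small, which is where the
     * "slots get used up" problem comes from. */
    enum {
        EXT_SOUND   = 1,
        EXT_THREADS = 2,
        EXT_MAX_ID  = 255     /* only this many slots in the design */
    };

    /* An old implementation of the API: it predates every extension,
     * so it reports all of them as unsupported. */
    static int api_has_extension(int ext_id)
    {
        (void)ext_id;
        return 0;
    }

    int main(void)
    {
        /* A new application probes before using new functionality... */
        if (api_has_extension(EXT_SOUND))
            printf("sound extension present, enabling audio\n");
        else
            printf("no sound extension, running silently\n");
        return 0;
    }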

  • Write a new API, but keep supporting the old API through an
    emulation layer. Write the new API such that you will never again have
    to make such a big change.

Again, Aristotle and string theory… you can’t predict everything.
If you could, you’d just write it all at once in one big burst of
innovation and leave it alone for the next thousand years.

Can you really predict everything that might be required/desired in
the next 10 years?

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html

-Eric

In the long run, compatibility can be provided by emulating the entire
computer and running the original OS. This is basically a solved
problem. It’s the newer software that’s troublesome. To pick a
non-Apple example, I can still play every game in the Quest for Glory
series except the latest, Quest for Glory V, which requires Windows 95
and does not run under DOSBox.

I cry foul on the Hero’s Quest/Quest For Glory example. :P The series
was notoriously buggy, and barely worked right when each game first
shipped. Specifically on the topic of compatibility, not once did
Sierra ever get the character importer to work correctly in the first
version of each game. I still have the patch disks to prove it. (Once
upon a time before the internet, we had to rely on US Mail to get
patch disks.)

In QFG2, I was disappointed. (Though there were so many different
bugs, it wasn’t the only patch disk I was sent.)

QFG3: How could they repeat the mistake again?

QFG4: Come on! What’s wrong with these people?

Actually by QFG5, I was half hoping they would screw it up again, just
to keep their perfect record. (We all knew it was the last game.) They
didn’t disappoint.

-Eric

Hello !

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html

Very interesting, thanks.

CU

In the long run, compatibility can be provided by emulating the entire
computer and running the original OS. This is basically a solved
problem. It’s the newer software that’s troublesome. To pick a
non-Apple example, I can still play every game in the Quest for Glory
series except the latest, Quest for Glory V, which requires Windows 95
and does not run under DOSBox.

I cry foul on the Hero’s Quest/Quest For Glory example. :P The series
was notoriously buggy, and barely worked right when each game first
shipped. Specifically on the topic of compatibility, not once did
Sierra ever get the character importer to work correctly in the first
version of each game. I still have the patch disks to prove it. (Once
upon a time before the internet, we had to rely on US Mail to get
patch disks.)

In QFG2, I was disappointed. (Though there were so many different
bugs, it wasn’t the only patch disk I was sent.)

QFG3: How could they repeat the mistake again?

QFG4: Come on! What’s wrong with these people?

Still, if you managed to get past the bugs, the games were fantastic.
I see them as some of the most entertaining games I’ve ever played.
Though considering I grew up with King’s Quest and Quest for Glory,
most of my bias can be attributed to nostalgia.

On Fri, Oct 2, 2009 at 1:29 AM, E. Wing wrote:

Actually by QFG5, I was half hoping they would screw it up again, just
to keep their perfect record. (We all knew it was the last game.) They
didn’t disappoint.

E. Wing wrote:

I cry foul on the Hero’s Quest/Quest For Glory example. :P The series
was notoriously buggy, and barely worked right when each game first
shipped.

While this is true, the fact remains that I can play the first four
games in the series under DOSBox. In fact, they run better in DOSBox
than they ever did under DOS.
Rainer Deyke - rainerd at eldwood.com

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html

Thanks Eric, it has been a very interesting reading.

I agree with you about the choice of 64-bit binary deployment.

On Fri, Oct 2, 2009 at 10:15 AM, E. Wing wrote:

Bye, Gabry

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted
here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html

It’s awesome that the Mac gets so much focus in SDL :)

But… is there any news about finally getting rid of the SDLmain
external dependency problem on Mac?
It is a real pain in the ass since SDLmain isn’t even part of
SDL.framework or the SDL source code distribution… I can’t even find it
in SVN, only inside the Mac developer add-ons.
I appreciate that there’s an Xcode template, but since we have
SDL.framework in /Library/Frameworks it is really a piece of cake to use
SDL inside generic Xcode projects, with one exception… SDLmain.

I shall ask the same question for 1.3. I’ve read somewhere that SDLmain
shouldn’t be necessary anymore for 1.3, but it still is (1.3 won’t link
without it).
Is there anything against just integrating a NIB-less main into the SDL
library itself (like into SDL_Init)?

Or maybe it is a problem with the framework itself, which cannot contain
"main" because it is bound at runtime?

Regards,
Adam Strzelecki | nanoant.com

Not related, but I just wanted to thank you for looking after the Mac OS
X port of SDL and providing Xcode templates. Thanks!

On 09-10-02 4:15 AM, E. Wing wrote:

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html


Chris Herborth (@Chris_Herborth) – http://www.pobox.com/~chrish/
Marooned, the survival game! – http://marooned-game.livejournal.com/
Never send a monster to do the work of an evil scientist.


So I don’t think there is much chance of changing this in 1.2.x. The
change will break too much legacy code the other way. Instead of
undefined symbols, people will get multiply defined symbols.
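
For anyone who hasn’t looked at why that is, here is a rough sketch of
the classic SDLmain arrangement from the application’s side (simplified;
this is not the actual SDLmain source). The SDL headers rename the
application’s main, and the small SDLmain library supplies the real main
that does the platform setup and then calls back into the application, so
folding that real main into the SDL library itself would collide with
projects that already compile and link SDLmain:

    #include "SDL.h"   /* on platforms that need SDLmain, this ends up
                          doing roughly: #define main SDL_main */

    /* Because of the rename, the full two-argument form is required;
       after preprocessing this function becomes SDL_main(). */
    int main(int argc, char *argv[])
    {
        if (SDL_Init(SDL_INIT_VIDEO) < 0)
            return 1;
        /* ... application code ... */
        SDL_Quit();
        return 0;
    }

    /* libSDLmain (SDLMain.m on Mac OS X) provides the real main(): it
       performs the platform-specific startup, Cocoa setup in the Mac
       case, and then calls SDL_main(argc, argv), i.e. the function
       above. */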

The SDLmain dependency should be removed in SDL 1.3. I can’t reproduce
your problem. Make sure you removed your 1.2 based SDL framework in
*/Library/Frameworks and have only the 1.3 version installed. The new
Xcode templates and updated SDLtests I submitted this week for 1.3
removed all references to SDLmain that I am aware of. So make sure
all your stuff is up to date.

-Eric

On 10/2/09, Adam Strzelecki wrote:

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted
here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html

It’s awesome that the Mac gets so much focus in SDL :)

But… is there any news about finally getting rid of the SDLmain
external dependency problem on Mac?
It is a real pain in the ass since SDLmain isn’t even part of
SDL.framework or the SDL source code distribution… I can’t even find it
in SVN, only inside the Mac developer add-ons.
I appreciate that there’s an Xcode template, but since we have
SDL.framework in /Library/Frameworks it is really a piece of cake to use
SDL inside generic Xcode projects, with one exception… SDLmain.

I shall ask the same question for 1.3. I’ve read somewhere that SDLmain
shouldn’t be necessary anymore for 1.3, but it still is (1.3 won’t link
without it).
Is there anything against just integrating a NIB-less main into the SDL
library itself (like into SDL_Init)?

Or maybe it is a problem with the framework itself, which cannot contain
"main" because it is bound at runtime?

Thanks, I appreciate that.

-Eric

On 10/2/09, Chris Herborth wrote:

On 09-10-02 4:15 AM, E. Wing wrote:

Very elaborate (pre)release notes for SDL 1.2.14 / Mac OS X posted here:
http://playcontrol.net/ewing/jibberjabber/big_behind-the-scenes_chang.html

Not related, but I just wanted to thank you for looking after the Mac OS
X port of SDL and providing Xcode templates. Thanks!

The SDLmain dependency should be removed in SDL 1.3. I can’t reproduce
your problem. Make sure you removed your 1.2 based SDL framework in
*/Library/Frameworks and have only the 1.3 version installed. The new
Xcode templates and updated SDLtests I submitted this week for 1.3
removed all references to SDLmain that I am aware of. So make sure
all your stuff is up to date.

Brilliant, no more SDLmain on Mac! -framework SDL -framework Cocoa
-framework Carbon -framework OpenGL is indeed enough now.

I think win32 is the last baby that cries for SDLmain now. Can we deal
with that?
It would be great if a simple -lSDL was enough, but it’s not. We need
-lmingw32 -lSDLmain -lSDL.
Moreover, sdl-config --cflags on Windows adds -Dmain=SDL_main, yikes!
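
(The practical consequence of that define, for anyone who hasn’t hit it
yet: since the preprocessor turns your main into SDL_main, the entry
point has to use the full two-argument signature, otherwise the renamed
function typically no longer matches the SDL_main declaration in the
headers and the build breaks. A minimal illustration, assuming the
standard SDL 1.2 setup on MinGW:)

    /* Built with sdl-config's -Dmain=SDL_main in effect: */
    #include "SDL.h"

    int main(int argc, char *argv[])  /* OK: becomes SDL_main(int, char **),
                                         which is what libSDLmain calls */
    {
        (void)argc;
        (void)argv;
        return 0;
    }

    /* By contrast, "int main(void)" would be renamed to SDL_main(void)
       and typically fails to build, since SDL.h already declares
       SDL_main with the two-argument signature. */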

Once there’s no SDLmain or fancy hacks, is there any need for sdl-config?

Cheers,
Adam Strzelecki | nanoant.com

Brilliant, no more SDLmain on Mac! -framework SDL -framework Cocoa
-framework Carbon -framework OpenGL is indeed enough now.

So actually, even for SDL 1.2, I don’t think you need Carbon. And
generally on Mac, you only need to explicitly link what you explicitly
use. So you only need OpenGL if you are explicitly using OpenGL. The
SDLMain stuff forced our hand on the -framework Cocoa so now that
SDLMain is gone in 1.3, you won’t need -framework Cocoa any more.

So in the pure SDL 1.3 case, you should only need -framework SDL

(I didn’t remember to remove Cocoa from the SDL 1.3 Xcode Application
Templates. I probably should, but it doesn’t hurt anything and I’m not
terribly motivated to make yet another change to them at the moment.)

I think win32 is the last baby that cries for SDLmain now. Can we deal
with that?
It would be great if a simple -lSDL was enough, but it’s not. We need
-lmingw32 -lSDLmain -lSDL.
Moreover, sdl-config --cflags on Windows adds -Dmain=SDL_main, yikes!

Once there’s no SDLmain or fancy hacks, is there any need for sdl-config?

You don’t need sdl-config as it is. You can always use CMake :)
http://www.cmake.org

-Eric

Question.

In my makefiles, to compile my SDL project I use this directive in gcc --> -SDL -L /usr/local/lib/libSDLmain.a
Do I need to change it to only -SDL -L /usr/local/lib/libSDL.a?

I use SDL 1.3 on Mac OS X 10.6.1

Thanks