[OT] Dao language 1.0.1 with SDL and OpenGL bindings

Hi,

Hi.

Dao is a simple yet powerful object-oriented programming language with many
advanced features, including soft (or optional) typing, a BNF-like macro
system, regular expressions, multi-dimensional numeric arrays, closures,
coroutines, asynchronous function calls for concurrent programming, etc. Dao
provides a rich set of standard data types, methods and libraries. Dao is
implemented as a light and efficient virtual machine with very transparent C
programming interfaces, which make it easy to extend Dao with C/C++ or embed
Dao into C/C++ programs.

It looks very interesting! Can you provide a comparison of Dao with Haxe?

http://haxe.org/



All JIT-optimized programs will soon be faster than precompiled
programs. This has been the case for so long in some technologies that
you may not even realize what’s in them. Do you consider your database
server to be a compiler and JIT optimizer for SQL programs? It
certainly is, and a very sophisticated one in certain ways that your
garden variety JIT optimizer doesn’t even touch.

Another weird bonus from some managed platforms is that you can save a
lot of memory if you don’t have any JIT optimization or other
memory-consuming optimization features. I believe JVM code is usually
a lot smaller than the code generated from it.

On Mon, Jun 8, 2009 at 1:37 PM, Mason Wheeler wrote:

I have a slightly different view of that. Languages are tools, so why use a
defective tool?
That’s why I avoid managed code.



I didn’t know about Haxe. I just went through its web pages quickly.

The first and obvious difference between Dao and Haxe lies in the fact that
Dao has its own runtime platform and is not targeted at others, while Haxe
targets other platforms and has no runtime system of its own.

As for the languages themselves, Dao and Haxe look similar, in particular in
their typing systems. However, in the details they differ, especially the
parts regarding class and method definition, function parameters, etc.
Module/package definitions in Dao and Haxe are also slightly different.

Another obvious difference is in their support for regular expressions,
where Haxe takes the Perl style and Dao takes the Lua style.

A number of features of Dao are missing in Haxe, and vice versa.


All JIT-optimized programs will soon be faster than precompiled
programs.

Can you elaborate on that? At first glance, it seems to me that the
compiler has to compile the program either way, whether at run time or at
compile time. The only advantage of the JIT compiler would be that it
can see the actual data set the program is running on. That might seem
a great advantage at first sight, but only if the program repeatedly
processes the same data set without mutating it (which is not what
programs really do). If the JIT compiler does not take the data into
account, then it cannot do better than a compiler that optimises the
code before the first run (in fact worse, since it consumes run time as
opposed to the traditional compile time); if it takes the data set into
account, then it has to re-run whenever the data set changes, thus
adding a run-time penalty for the re-compilation and optimisation.
Unless the optimisation based on the data set gains more on the
program fragment than the JIT compilation costs, you
are not gaining anything.
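
To make that trade-off concrete, here is a toy sketch in Python (a
hypothetical illustration, not from any real JIT): the "optimisation" is a
prefix-sum table built from the data set, which pays for itself only while
the same data is queried repeatedly, and must be rebuilt whenever the data
changes.

```python
def make_range_sum():
    # Toy model of data-set-dependent optimisation with a re-run penalty.
    compiled_for = None   # identity of the data set we last "compiled" for
    prefix = None         # the "optimised code": a prefix-sum table

    def range_sum(data, lo, hi):
        nonlocal compiled_for, prefix
        if compiled_for is not data:          # data set changed:
            prefix = [0]                      # pay the compilation cost again
            for x in data:
                prefix.append(prefix[-1] + x)
            compiled_for = data
        return prefix[hi] - prefix[lo]        # O(1) per query once built

    return range_sum

rs = make_range_sum()
data = [3, 1, 4, 1, 5]
print(rs(data, 0, 5))   # 14: pays the O(n) precomputation on this call
print(rs(data, 1, 3))   # 5: reuses the table, so the query is now cheap
# Note: mutating `data` in place would silently invalidate the table --
# detecting that is exactly the run-time overhead being argued about above.
```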

Do you consider your database
server to be a compiler and JIT optimizer for SQL programs? It
certainly is, and a very sophisticated one in certain ways that your
garden variety JIT optimizer doesn’t even touch.

Well, let’s take the database server as a JIT compiler. It receives
queries from my SQL app and tries to execute those as fast as it can.
It has to do JIT optimisation because it has no idea what my next
request will be. Had it had knowledge of my entire app, that is, if it
were “compiling” my SQL statement sequence in a precompiled manner, then
it would know a lot more about my access pattern; therefore it 1) would
have much more information about what I was going to do, and thus more
chance to optimise, and 2) would not have to prepare for all
possibilities at run time, because it could eliminate all possible SQL
statement sequences that were not in my code. Can you imagine the speed
increase if the SQL engine could be absolutely sure that I would never
mutate the database, and that I never use a JOIN? All my app does is
SELECT and further SELECT from the set, because it is a simple
query application, narrowing the selection down further and further,
like the parametric chip selection tool on most
semiconductor manufacturers’ websites (data changes are relatively
infrequent, so you can consider the database a constant). The entire
business with rollbacks, locks, index merging and whatever else the SQL
engine does when you write a real DB application, like a bank or an
airline reservation system, with loads of JOINs and nested roll-back
capable modifications and locking and whatnot, could be thrown out. A
real DB server can’t do that because, as you pointed out, it does JIT
and therefore cannot know what my next action will be. Therefore it
checks for record locks, even though there would never be any; after my
SELECT it prepares all the info it needs to serve my next request, were
it not another SELECT, and so on. So I think pre-compiled SQL would
be faster.
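
(Prepared, i.e. parameterised, statements already give part of this: the
engine parses and plans the statement once and reuses the plan across
requests. A minimal sketch using SQLite through Python's sqlite3 module,
with a hypothetical schema, not the actual chip-selection app described
above:)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chips (name TEXT, mhz INTEGER, pins INTEGER)")
con.executemany("INSERT INTO chips VALUES (?, ?, ?)",
                [("a", 20, 8), ("b", 40, 16), ("c", 80, 32)])

# The same parameterised SELECT, narrowed further on each request. The
# sqlite3 module keeps compiled statements in a per-connection cache, so
# the query is planned once and reused; only the parameters change.
query = "SELECT name FROM chips WHERE mhz >= ? AND pins <= ?"
for mhz, pins in [(20, 32), (40, 32), (40, 16)]:
    print(mhz, pins, con.execute(query, (mhz, pins)).fetchall())
```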

Zoltan

All JIT-optimized programs will soon be faster than precompiled
programs.


I think what he means is that a JIT will be able to do little
hardware-specific tweaks like optimizing for the number of registers
and special instructions (SIMD, etc.) available on the CPU. Problem
is, that doesn’t make up for the extra overhead and the design-time
optimization abilities you lose by writing in a managed language.

Ask any experienced gamer and they’ll tell you that artificial intelligence
is never a match for natural stupidity. People can think and reason and
make informed decisions. All a computer can do is match patterns,
and until we get true AI, (I’m thinking computers we can hold a
natural-language conversation with, like on Star Trek,) there will never
be a computer program that can do a better job of managing your code
in the general case than an experienced programmer could do.

Managed programs suffer from several performance issues. In particular,
they’re very hard on the cache. They tend to be dependent on large
external frameworks, which screws up locality of reference for CPU
instruction caches. They also tend to use tracing garbage collectors,
which bloats the memory footprint for bookkeeping (leading to bad
locality of reference for CPU data caches,) slows down performance,
and encourages poor programming leading to memory leakage by
coders who have never learned to be aware of this issue. Teaching
the compiler to use more CPU registers can’t make up for a program
that’s unable to cache properly, especially since it’s considered good
practice these days to keep individual functions as small as possible,
leading to frequent calls and returns between different methods.


And yet with all these problems, more and more game studios are using
managed languages. I wonder why?! :slight_smile:

Most game engines nowadays have their core programmed in heavily optimized
C/C++ code and use some form of managed language to drive them.

Microsoft introduced XNA Studio for hobby developers, but as they have said
at several game developer conferences, demand for a professional version
was so high that with XNA Game Studio 3.0 they also introduced a
professional developer program.

C# and Java are increasing their presence in job offers in the gaming
industry.

I know at least of two game studios in Germany (where I live) that are huge
.Net users.

At the end of the day, if the language brings you enough productivity and
the speed it offers for your type of game is good enough, that is what
counts.

For sure a managed language might not make sense for developing PSP or DS
games, just as an example, but for the 360 it is working quite well.

Paulo


Of course it’ll run on the 360. The Xbox is a horribly over-engineered machine designed by Microsoft, undisputed
king of bloated code, to run software built on a Microsoft framework. The original Xbox had twice the RAM and
about 2.5X as much CPU as the PlayStation 2, for about equal in-game performance. The 360 has twice the
RAM of the PS3 (the CPUs are harder to compare straight-up) again for similar levels of performance.

The thing is, now that miniaturization is starting to hit physical limits and the continual exponential growth
previously afforded by Moore’s Law has come to a screeching halt, (the average desktop ought to be running
on 30 GHz CPUs with 32 GB of RAM by now, but it’s not,) performance is becoming a bigger and bigger issue.
Even Microsoft’s been forced to acknowledge it: After the bloated monstrosity that was Windows Vista was
generally received like the Second Coming of Windows ME, Windows 7 has been slimmed way down and
is actually going to be light enough to run on a netbook.


Managed code does have its drawbacks, but it offers a great deal of
flexibility and agility for software development. I think that, in the
future, computer systems (at least for common users) will not be without
managed code. Like it or not, native and managed code will co-exist.

I agree that the use of managed code should not be excessive; there must
be a trade-off point to make better use of human resources and computing
power.


The thing is, now that miniaturization is starting to hit physical limits and the continual exponential growth
previously afforded by Moore’s Law has come to a screeching halt, (the average desktop ought to be running
on 30 GHz CPUs with 32 GB of RAM by now, but it’s not,) performance is becoming a bigger and bigger issue.

Just for clarification, Moore’s Law is about the number of transistors on
a die, and that’s still going strong and healthy. It’s just that while
those transistors used to translate into more linear performance
(miniaturization allowing higher clocks, and leaving space for more
cache), now we get more cores instead.



Managed codes do have their drawbacks, but they offer a great deal of flexibility and agility for software
development.

I keep hearing this assertion, that managed code makes you more productive
because it makes code easier to write. I’ve gotta call BS on that one,
though. Writing code is the least important (and least time-consuming)
part of a programming project. Debugging and maintenance are far more
important over the life of a project, and high levels of abstraction don’t
help there; they get in the way. All managed code makes it easier to do is
write bad code very quickly.

(Paulo mentioned that the Xbox 360 uses a lot of managed code. You know
what else it has a lot of? Updates, because the release version is full of
bugs. I don’t think I ever once saw an NES or SNES game crash…)


All JIT-optimized programs will soon be faster than precompiled
programs.

Can you elaborate on that? […] The only advantage of the JIT compiler
would be that it can see the actual data set the program is running on.
That might seem a great advantage at first sight, but only if the program
repeatedly processes the same data set without mutating it (which is not
what programs really do).

Actually, some things are the same, like, 99% of the time. For instance,
a certain local variable might provably always hold a value of the
same type. Or it may happen that it is almost always a certain type.
Optimizations can be made at that point. Or some other property
of a certain local variable might always be the same. A rather
prolific example of this: hash-table pre-fetch caching. That’s… not
what it’s actually usually called, but the idea is that you already
know the outcome of a certain operation because the values upon which
it depends are unlikely, or even guaranteed, not to change.
There are a lot of things going on out there which benefit from these
kinds of optimizations.
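
Written by hand instead of speculatively by a JIT, that "the outcome is
already known" idea is plain memoization. A minimal Python sketch
(hypothetical function, not from the thread):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def word_counts(text):
    # Stand-in for an expensive pure computation: same input, same outcome.
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return tuple(sorted(counts.items()))

doc = "the quick brown fox jumps over the lazy dog"
word_counts(doc)   # computed on the first call
word_counts(doc)   # the values it depends on haven't changed: served from cache
```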


Well, let’s take the database server as a JIT compiler. […] So I think
pre-compiled SQL would be faster.

In practice, none of these things are true, because database
applications are never bottlenecked at the CPU; they are bottlenecked
at bandwidth.

You should never assume the database will never be changed, because a
mistake might be made.

Does your application only allow one client to access any individual
database at any time? You may want an “embedded” database that does
not make transactions via a client-server model, but rather is just a
library you link directly into your application. If you want to see
something really tubular, check out Metakit!
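
(For anyone who hasn't used one: with an embedded database the entire
engine runs inside your process, with no server round-trips. A sketch
using SQLite via Python's standard sqlite3 module, purely as a stand-in
for Metakit, whose API is not shown here:)

```python
import sqlite3

# The "database server" is just a library: the engine runs in-process,
# reading and writing an ordinary file.
con = sqlite3.connect("parts.db")
con.execute("CREATE TABLE IF NOT EXISTS parts (name TEXT, mhz INTEGER)")
con.execute("INSERT INTO parts VALUES (?, ?)", ("chip-a", 40))
con.commit()

for name, mhz in con.execute("SELECT name, mhz FROM parts WHERE mhz >= ?", (20,)):
    print(name, mhz)
con.close()
```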



Managed codes do have their drawbacks, but they offer a great deal of flexibility and agility for software
development.

Debugging and maintenance is far more important
over the life of a project, and high levels of abstraction don’t help there, they get in the way.

Utter nonsense. Of course it helps. A lot of programming is purely
incidental tricks of the trade, and doesn’t have much to do with how
you want your program to behave; it has more to do with interfacing with
someone else’s code to communicate how you want it to behave.
Higher-level languages abstract us further away from these
distractions and allow us to focus on expressing the most salient
design elements of our application. I’ve resisted garbage collection
for a long time, but I’ll tell you what: no more double-frees, no more
null pointer dereferences, and no more uninitialized pointer
dereferences. How does that not make maintenance easier? Have fun
synchronizing those operations in a multi-threaded environment.

All managed code makes it easier to do is write bad code very quickly.

It also makes writing good code quickly easier.

(Paulo mentioned that the Xbox 360 uses a lot of managed code. You know
what else it has a lot of? Updates, because the release version is full of
bugs. I don’t think I ever once saw an NES or SNES game crash…)

I have. There are a lot of bugs in games, too. There are more today
because products contain more code today. Computers can do more shit
than they used to be able to do, and so that’s what we have them do.


I’ve resisted garbage collection for a long time, but I’ll tell you what:
no more double-frees, no more null pointer dereferences, and no more
uninitialized pointer dereferences. How does that not make
maintenance easier?

It’s quite possible to (attempt to) dereference a null poin-- ahem "reference"
in a managed, garbage-collected language. In fact, it’s possible enough that
they often have a special exception for it.

As for the other problems, how were you getting them in the first place?
Double-frees never ever happen if the coder understands the principle of
object ownership, and uninitialized variable use (of any kind) is an absolute
non-issue if you have a halfway decent compiler. (Uninitialized memory
has come up before on here. I believe you were even part of the discussion.)

The only actually serious problem that GC can prevent is memory leaks,
and only for sufficiently low values of “prevent”, and as often as not only
if the programmer is still aware of the way GC works and codes in such
a way as to make their unneeded objects collectable.

Have fun synchronizing those operations in a multi-threaded environment.

Umm… I’ve never had any trouble with memory management and
multithreading. It’s pretty simple, actually. Don’t pass objects between
threads when you can avoid it. If you have to, don’t ever do so until they’re
fully created, configured and ready to use. And remember the principle of
ownership. It’s that easy, really.
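
That discipline is easy to sketch. A minimal Python example (hypothetical
worker, nobody's real code): the object is fully built before it crosses
the thread boundary, and a queue hands over ownership so only one thread
touches it at a time.

```python
import queue
import threading

tasks = queue.Queue()

def worker():
    while True:
        job = tasks.get()          # the worker is now the sole owner of `job`
        if job is None:            # shutdown signal
            break
        print("processing", job["name"], job["size"])

t = threading.Thread(target=worker)
t.start()

job = {"name": "resize", "size": (640, 480)}  # fully created and configured...
tasks.put(job)                                # ...before any other thread sees it
job = None                                    # the producer drops its reference

tasks.put(None)
t.join()
```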


Ya’know… I understand why you want to believe what you said, but
the fact is that none of it has been true for about 40 years now. At
least not for more than 99% of programmers.

Funniest thing I ever saw, back in oh… I guess it was '89… I was
working on development tools for SIMD processors used in high
performance graphics workstations. We were having a status meeting
with a research group at a university north of Denver. Our best
microcoder said just what you just said. He had a bit of code that he
had hand optimized and he challenged researchers to show that their
compiler could do as well. Ok, so they coded it up in a C-like
language and ran it through their compiler. Took all of ten minutes.
The programmer had spent more than a week on his. The compiled code
was shorter, and faster, and correct. When both pieces of code were
run they got two different answers; it turned out the hand-coded code
was wrong. The compiled code benefited from the use of an algorithm that
first flipped all the loops, then unrolled them, then re-rolled
them to make use of all available parallelism, and then the
instructions got scanned and repacked both forward and backward.

Saw the same story when I watched a bunch of asm coders forced into
using PLUS (a UNIVAC equivalent of C that looks like Pascal) back in
’79. I’ve seen this over and over and over again.

The simple fact is that the human mind can only keep track of 7 +/- 2
things in short-term memory. (Why do you think a phone number is 7
digits long?) Any computer can keep track of millions of things. That
means that no matter what your opinion, for the most part, compilers
do a better job of global and local optimization. Show me a programmer
who can use trace data to decide to generate those nasty short
functions inline instead of out of line, on a function-by-function
basis and on the basis of the actual use of the code. Show me a
programmer who can keep track of the probable cache footprint of code
in different source files and optimize access patterns to prevent
cache thrashing.

OTOH, it is possible that no compiler can do as well as you. There
are some astonishingly good programmers in the world. And, you could
well be one of them. (I used to generate 10 to 20 times more bug free
code than the people I worked with, and I know of people who are 10 or
more times more productive than I ever was.) Then consider that if you
are that good, then 98% of the rest of programmers are not good enough
to do as well as a compiler. Then consider that if you used your skill
to learn how to leverage a good optimizing compiler that you could
then be many times better than you currently are.

Bob Pendleton


OTOH, it is possible that no compiler can do as well as you. There
are some astonishingly good programmers in the world. And, you could
well be one of them. (I used to generate 10 to 20 times more bug free
code than the people I worked with, and I know of people who are 10 or
more times more productive than I ever was.) Then consider that if you
are that good, then 98% of the rest of programmers are not good enough
to do as well as a compiler. Then consider that if you used your skill
to learn how to leverage a good optimizing compiler that you could
then be many times better than you currently are.

Does it mean I’m one of those if I honestly don’t get why everyone else
says that stuff like memory management, recursion and multi-threading
are difficult? :stuck_out_tongue:

And I never claimed I’m better than a compiler. I know better than that.
What I said is that no matter how good your compiler is at optimizing
code, it can never make up for bad language design or bad algorithm
design.

I will say this. With a profiler and a natively-compiled language that
will let me code at any level, functional, OO, procedural or even raw
assembly, I will always be able to produce better code than I could
with a compiler that tries to manage all the complexity for you and
doesn’t let you work below certain arbitrary levels of abstraction.


Get a room you guys :P


I resisted responding to this the first time you made that claim, but since
you’ve repeated it, newbies might believe it.

Planning and designing code is the most important and most time-consuming
part of programming. Debugging should occupy no more than 1/3 of total
project time, and typically should be much less.

Excessive time debugging is a clear indication that the code was not well
designed.

Both research and personal experience support this. I once worked on a
software project in which we spent three months designing the code and
two days debugging it.

If by “writing code” you meant the physical act of typing the lines of code,
then I’ll agree that it is trivial. But to me “writing code” includes the
design phase, which is far more important than any other phase of project
development.

Jeff

On Tuesday 09 June 2009 16:14, Mason Wheeler wrote:

Writing code is the least important (and least time-consuming)
part of a programming project.

2009/6/10 Jeff Post <j_post at pacbell.net>

Planning and designing code is the most important and most time-consuming
part of programming. Debugging should occupy no more than 1/3 of total
project time, and typically should be much less.

Excessive time debugging is a clear indication that the code was not well
designed.

You are absolutely right.

Limin


It’s often fun to see wars for and against the old way and the new way of
doing computing. As a professional senior developer of bleeding-edge MT
applications in low-level languages (C, C++, ASM, etc.) and as a developer
of so-called 4th-generation languages (x/harbour, Falcon), I work
every day with both worlds from the inside, and often make them
co-exist in the same application.

The rants about “this being better than that” usually stem from a basic
ignorance of the underlying topic. For example, guys saying that MT is
“safer” when used with managed languages don’t even imagine what
techniques those languages use to make MT “safe”, and have very
probably never developed an application in (say) Java complex enough to
be deadlock-prone if not well designed.

Excessive trust in 4th-generation language devices, e.g. GC, can hurt.
I’ll never forget that the Fedora 9 installation program, written in
Python, had a severe memory leak that prevented it from being installed
on any 256 MB machine. The leak came down to simply forgetting to remove
the references to unused data. The same can be said for MT devices, whose
automatic use makes many 4th-generation languages actually unsuitable for
hard-core parallelism. But they’re great for GUIs, where 100 ops per
second are “a lot”.

The long-story-short point is that neither is better; they are all tools
at your disposal, and they suit your needs better in some situations and
not in others. Knowing when to use them, and using them effectively
(read: knowing what you’re doing), is up to you. Putting excessive trust
in them usually means a basic lack of self-instruction about the internals
of the tools you use. It is very like trying to build a skyscraper without
computing the structural weights and comparing them against the materials
at your disposal. For any non-trivial application, that is asking for a
“safe disaster”, as in the Fedora installer case.

Whenever I read “alas, we have GC so we can’t fail” sorts of comments I
get shivers down my spine, and I pray to various deities (different
deities fitting different contexts) never to have that sort of guy in my
workgroup.

My 2c
Giancarlo.

Nice way of putting it.

That is why I was saying that languages are just tools, and you have to
know them well enough to see which one applies best to a given situation.

Developers should be polyglot. Know different languages and paradigms. These
are our tools.

You don’t see a craftsman using just one type of hammer or one type of
screwdriver for everything, do you?

Paulo
