Ya’know… I understand why you want to believe what you said, but
the fact is that none of it has been true for about 40 years now. At
least not for more than 99% of programmers.
Funniest thing I ever saw, back in oh… I guess it was '89… I was
working on development tools for SIMD processors used in high
performance graphics workstations. We were having a status meeting
with a research group at a university north of Denver. Our best
microcoder said exactly what you just said. He had a bit of code that
he had hand-optimized and he challenged the researchers to show that
their compiler could do as well. Ok, so they coded it up in a C-like
language and ran it through their compiler. Took all of ten minutes.
The programmer had spent more than a week on his. The compiled code
was shorter, and faster, and correct. When both pieces of code were
run they got two different answers; it turned out the hand-coded
version was the wrong one. The compiled code benefited from an
algorithm that first flipped (interchanged) all the loops, then
unrolled them, then rerolled them to make use of all available
parallelism, and then scanned and repacked the instructions both
forward and backward.
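If you have never seen those first two steps, here is a hand sketch in
C (not the actual code from that meeting; the sizes are made up and N
is assumed to be a multiple of 4):

    /* Hand sketch only. M and N are made-up sizes. */
    enum { M = 64, N = 64 };

    /* Before: column-major traversal. Poor locality, and one multiply
       per iteration exposes no parallelism. */
    void mul_naive(float a[M][N], float b[M][N], float c[M][N])
    {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < M; i++)
                a[i][j] = b[i][j] * c[i][j];
    }

    /* After: loops flipped to row-major order, inner loop unrolled
       by 4. The four independent multiplies can then be repacked into
       one wide SIMD operation by the instruction scheduler. */
    void mul_flipped_unrolled(float a[M][N], float b[M][N], float c[M][N])
    {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j += 4) {
                a[i][j+0] = b[i][j+0] * c[i][j+0];
                a[i][j+1] = b[i][j+1] * c[i][j+1];
                a[i][j+2] = b[i][j+2] * c[i][j+2];
                a[i][j+3] = b[i][j+3] * c[i][j+3];
            }
    }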
Saw the same story when I watched a bunch of asm coders forced into
using PLUS (a UNIVAC equivalent of C that looks like Pascal) back in
’79. I’ve seen this over and over and over again.
The simple fact is that the human mind can only keep track of 7 +/- 2
things in short-term memory. (Why do you think a phone number is 7
digits long?) Any computer can keep track of millions of things. That
means that no matter what your opinion, for the most part, compilers
do a better job of global and local optimization. Show me a programmer
who can use trace data to decide, on a function-by-function basis and
based on the actual use of the code, whether to generate those nasty
short functions in-line instead of out-of-line. Show me a programmer
who can keep track of the probable cache footprint of code in
different source files and optimize access patterns to prevent cache
thrashing.
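For what it's worth, that trace-driven inlining is exactly what
profile-guided optimization does today. With GCC, for example, a
profile-guided build looks roughly like this (the flags are real GCC
flags; the file and input names are made up):

    gcc -O2 -fprofile-generate app.c -o app   # instrumented build
    ./app typical_input                       # run it; collects call counts
    gcc -O2 -fprofile-use app.c -o app        # rebuild: inlining and code
                                              # layout now follow the trace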
OTOH, it is possible that no compiler can do as well as you. There
are some astonishingly good programmers in the world, and you could
well be one of them. (I used to generate 10 to 20 times more bug-free
code than the people I worked with, and I know of people who are 10 or
more times more productive than I ever was.) But consider that if you
are that good, then 98% of the rest of the programmers are not good
enough to do as well as a compiler. And consider that if you used that
skill to learn how to leverage a good optimizing compiler, you could
be many times better than you are now.
Bob Pendleton

On Tue, Jun 9, 2009 at 4:06 PM, Mason Wheeler wrote:
----- Original Message ----
From: Zoltán Kócsi
Subject: Re: [SDL] [OT] Dao language 1.0.1 with SDL and OpenGL bindings
All JIT-optimized programs will soon be faster than precompiled
programs.
Can you elaborate on that? At first glance, it seems to me that the
compiler has to compile the program either way, whether at run time or
at compile time. The only advantage of the JIT compiler would be that
it can see the actual data set the program is running on. That might
seem a great advantage at first sight, but only if the program
repeatedly processes the same data set without mutating it (which is
not what programs really do). If the JIT compiler does not take the
data into account, then it cannot do better than a compiler that
optimises the code before the first run (in fact it does worse, since
it consumes run time as opposed to the traditional compile time). If
it takes the data set into account, then it has to re-run whenever the
data set changes, adding a run-time penalty for the re-compilation and
optimisation. Unless the optimisation based on the data set gains more
on the program fragment than the JIT compilation costs, you are not
gaining anything.
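To make that trade-off concrete, here is a sketch in C (function names
are hypothetical) of the kind of data-set-dependent specialisation a
JIT could do, and the guard it pays for:

    /* Generic version: 'stride' is only known at run time. */
    long sum_generic(const int *v, int n, int stride)
    {
        long s = 0;
        for (int i = 0; i < n; i += stride)
            s += v[i];
        return s;
    }

    /* Body a JIT might emit after observing stride == 1: the loop is
       now trivially vectorisable. But the JIT must guard that
       assumption; if stride ever changes it has to deoptimise and
       recompile, which is exactly the run-time penalty described
       above. */
    long sum_stride1(const int *v, int n)
    {
        long s = 0;
        for (int i = 0; i < n; i++)
            s += v[i];
        return s;
    }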
I think what he means is that a JIT will be able to do little
hardware-specific tweaks like optimizing for the number of registers
and special instructions (SIMD, etc.) available on the CPU. Problem
is, that doesn't make up for the extra overhead and the design-time
optimization abilities you lose by writing in a managed language.
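Worth noting that the SIMD-selection part isn't exclusive to JITs,
either; a statically compiled program can make the same check once at
startup. A minimal sketch in C, assuming GCC on x86
(__builtin_cpu_supports is a GCC extension; the function names are
made up):

    #include <emmintrin.h>   /* SSE2 intrinsics */

    /* Portable fallback: averages src into dst one float at a time. */
    static void blend_scalar(float *dst, const float *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 0.5f * (dst[i] + src[i]);
    }

    /* SSE2 version: processes 4 floats per instruction. */
    static void blend_sse2(float *dst, const float *src, int n)
    {
        const __m128 half = _mm_set1_ps(0.5f);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 d = _mm_loadu_ps(dst + i);
            __m128 s = _mm_loadu_ps(src + i);
            _mm_storeu_ps(dst + i, _mm_mul_ps(half, _mm_add_ps(d, s)));
        }
        for (; i < n; i++)   /* scalar tail */
            dst[i] = 0.5f * (dst[i] + src[i]);
    }

    typedef void (*blend_fn)(float *, const float *, int);

    /* One runtime check, like a JIT would make, then a fixed pointer. */
    blend_fn pick_blend(void)
    {
        return __builtin_cpu_supports("sse2") ? blend_sse2 : blend_scalar;
    }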
Ask any experienced gamer and they'll tell you that artificial intelligence
is never a match for natural stupidity. People can think and reason and
make informed decisions. All a computer can do is match patterns,
and until we get true AI (I'm thinking computers we can hold a
natural-language conversation with, like on Star Trek), there will never
be a computer program that can do a better job of managing your code
in the general case than an experienced programmer could do.
Managed programs suffer from several performance issues. In particular,
they're very hard on the cache. They tend to be dependent on large
external frameworks, which screws up locality of reference for CPU
instruction caches. They also tend to use tracing garbage collectors,
which bloat the memory footprint with bookkeeping (leading to bad
locality of reference for CPU data caches), slow down performance,
and encourage poor programming, leading to memory leaks from coders
who have never learned to be aware of this issue. Teaching the
compiler to use more CPU registers can't make up for a program that
can't use the cache properly, especially since it's considered good
practice these days to keep individual functions as small as possible,
leading to frequent calls and returns between different methods.
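For the data-cache half of that, compare the same walk over contiguous
storage and over individually allocated heap nodes, which is roughly
the layout a managed runtime hands you. A hedged sketch in C (types
and names are made up):

    #include <stddef.h>

    typedef struct { float x, y; } Point;

    /* Contiguous array: several points share each cache line, so the
       hardware prefetcher streams the whole walk. */
    float sum_array(const Point *pts, int n)
    {
        float s = 0.0f;
        for (int i = 0; i < n; i++)
            s += pts[i].x + pts[i].y;
        return s;
    }

    /* One object per allocation: every step is a dependent load from
       an unpredictable address, and each node carries allocator (or
       GC) bookkeeping on top of the payload. */
    typedef struct Node { Point p; struct Node *next; } Node;

    float sum_list(const Node *head)
    {
        float s = 0.0f;
        for (const Node *node = head; node != NULL; node = node->next)
            s += node->p.x + node->p.y;
        return s;
    }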