Refresh trouble in X11

Or an OS like “Menuet OS”, with basic dev tools (ASM/C), a kernel 100% in assembly, network support, and… some games! (including Doom)

I think we’re going a little bit off topic though :slight_smile:

— On Tue, 27 Jan 2009, Paulo Pinto wrote:
From: Paulo Pinto
Subject: Re: [SDL] Refresh trouble in X11
To: “A list for developers using the SDL library. (includes SDL-announce)”
Date: Tuesday, January 27, 2009, 8:00 AM

That is very easy to answer.

In those days the complete OS was in ROM. When you turned the machine on, the initialization code
just had to create a few data structures in RAM, and you were set.

Now a normal computer has to:

- do the BIOS boot test verification;
- load the OS from disk to RAM, while performing the following actions:
  - the dynamic linker has to search for all the dynamic libraries each executable requires, load them, and relocate the symbols;
  - several processes are launched in parallel, triggering lots of disk access requests;
  - network configuration is started;
  - and so on.

So computers nowadays are way faster, but we have managed to slow them down by making them
do lots of stuff during the boot process. I think I was able to turn on my ZX Spectrum and load a game
quicker than I can load XP on my laptop. :slight_smile:

If you look at the current solution of having a Linux image in ROM in some new laptops, it goes exactly in
the direction of what computers used to be in the 80s. It also boots very quickly, in under 30s.


Paulo

SDL mailing list

SDL at lists.libsdl.org

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org



The rules change when the measure of every facet of our machines
increases by a factor of 1,000.

Now, my first two computer systems were an NES and an Apple IIe. I could stick a cartridge in the NES, hit the power button, and be up and running in under 15 seconds. How long does it take a PS3, Wii or Xbox 360 to boot up these days? (Or an NDS or PSP?)

Same goes for the IIe. Most of my games and other applications were up and running in under a minute once I hit the power switch. Now, if you’re lucky, you can boot a modern PC with Windows XP and reach the desktop in about a minute, but it still takes a while for the system to load autorun processes and stabilize; then you need to find your program and wait for it to load up. OS X is similar. Linux probably is too. (If you’re using Vista, you’ll be waiting even longer.)

If computers have increased in power by a factor of several thousand, why are they so much slower today?

You do understand the difference in performance between jumping to an
address in ROM and having to first load several megabytes from a disk
before you can jump to it, right? You do understand that none of the
machines on the first list have enough memory to hold the runtime
used on any of the machines on your second list, right?

So, is this another joke that I don’t get? Or did you have a point?

Bob Pendleton

On Mon, Jan 26, 2009 at 11:52 AM, Mason Wheeler wrote:

----- Original Message ----
From: Bob Pendleton <@Bob_Pendleton>
Subject: Re: [SDL] Refresh trouble in X11




You do understand the difference in performance between jumping to an
address in ROM and having to first load several megabytes from a disk
before you can jump to it, right? You do understand that none of the
machines on the first list have enough memory to hold the runtime
used on any of the machines on your second list, right?

So, is this another joke that I don’t get? Or did you have a point?

Not a joke. More of a gripe, really. Why do so many people act
like having more hardware available gives them license to code big,
ugly, bloated messes and let the hardware brute-force through it? Why,
when we have computers that are approximately TEN FREAKING THOUSAND
TIMES STRONGER IN EVERY WAY, do they run SLOWER than the computers I
used as a child?!?

I’ve heard it referred to as the Gas Law of Software: “Your
executable will gradually expand until it fills all available
resources.” (A friend of mine prefers the term Moore’s Flaw.) Instead
of taking advantage of better hardware to achieve better performance,
people use it as a crutch to get away with writing worse code. You
start with people too lazy to take the time to clean up their own
objects, and you end up with the sort of abject idiocy you see in
Java and the .NET framework, where there are no true primitive types;
even integers, booleans and chars are objects that need to be
constructed. (http://msdn.microsoft.com/en-us/library/s1ax56ch(VS.80).aspx)
And people think that somehow makes for better programming?!? All it
really results in is lazy, ignorant programmers whose technique and
skill level are no more developed than a child’s building with Legos, who
have no clue where to even start looking when something goes seriously
wrong, because they don’t understand what’s going on under the hood in
the first place.

But try explaining that to anyone who’s drunk a few too many cups
of Managed Code Kool-Aid. They’ll talk your ears off about how all
these fundamental flaws are “helpful features” (it’s a feature, not a
bug?), mutter about how you just don’t get it, maybe rhapsodize for a
bit about how much easier it makes programming, and then write you
off as a philistine and go back to writing their bloatware.

OK, so I guess that was more of a rant than a gripe. But I'm not the only one concerned about crappy new languages that try to make coding "easy" while dumbing down programmers and encouraging poor coding skills. Check out http://www.stsc.hill.af.mil/crosstalk/2008/01/0801dewarschonberg.html or http://www.joelonsoftware.com/articles/ThePerilsofJavaSchools.html . They're talking about Java specifically, but you can replace "Java" with ".NET", "Python" or "Ruby" and get the same concerns. I read an article several years ago that argued that every programmer should be required by law to make sure that his or her code will compile and execute properly on a 486. I wouldn't quite go that far, but the basic principle is sound.

“There is no programming problem that can’t be solved by adding another layer of abstraction, except for the problem of too many layers of abstraction.”

  • Francisco Lopez-Alvarez

----- Original Message ----

From: Bob Pendleton
Subject: Re: [SDL] Refresh trouble in X11

I read an article several years ago that argued that every programmer should be required by law to make sure that his or her code will compile and execute properly on a 486. I wouldn’t quite go that far, but the basic principle is sound.

I use valgrind on my code a lot. It finds memory leaks and uses of
uninitialized memory, and keeps one honest about performance, even on
ridiculously overpowered machines; and if that doesn’t do it, use the
callgrind tool on it, which will lead you to the hotspots very easily. :-)

On Wed, Jan 28, 2009 at 5:56 PM, Mason Wheeler wrote:


http://pphaneuf.livejournal.com/


So you actually know what a profiler is and how to use one?
You, sir, are a gentleman and a scholar, and a beacon of civilization
in these dark times. :P

----- Original Message ----

From: Pierre Phaneuf
Subject: Re: [SDL] Refresh trouble in X11

Hi,

speaking as someone who has drunk a lot of managed Kool-Aid these last
few years.

In those days software was developed mostly at a very small scale, and there
were lots of things that a 486 could not even dream of doing.

Nowadays, managed languages allow me, together with teams of around 300
people scattered across the globe, to develop software that runs in clusters
and processes gigabytes of data per hour.

We too use profilers, and lots of other tools to perform code analysis and
verification. And we are very happy that a programmer working on the project,
but at another site, has not made a pointer error that kills our server,
which in the old days might have meant a very angry customer while we
created a task force to track down the issue.

Assembly and C are still required, but even then, nowadays processors are so
complex that most compilers will outperform hand-coded assembly. Especially
if the so-called managed language makes good use of profile-guided
optimizations in its JIT.

Paulo


This is getting way off-topic, so I figured I’d just send it to you instead of the whole
list.

Just out of curiosity, what sort of software do you do? I work on a similar scale. I’m
part of a company of approximately 150 people, most of whom are on the non-technical
side of things. (Admin, sales, QA, helpdesk, etc.) Our coding staff comprises about 30
developers. We do a multitier database app that’s deployed on hundreds-of-gigabyte
databases around the world; it’s the industry-leading program for TV station scheduling
management and it also runs on a gigabytes-per-hour scale, and stress testing shows
that the main limiting speed factor is the bandwidth of the network it’s run on.

Codewise, it’s a big, unwieldy beast: about 3 million lines of code for the main app,
plus a handful of dependencies (DLLs, middle-tier servers, etc.). I’m having a bit of
trouble wrapping my head around the sort of software it would take a 300-person team
to create. How do y’all keep from tripping over each other?

And JFTR, we’ve never had a pointer error that kills the client, as far as I’m aware, because
Delphi has very robust exception-handling support. (If you use C#, you’re probably
familiar with it already; Microsoft basically created the .NET framework by hiring the guy
who created Delphi away from Borland and having him re-implement the whole thing in
C syntax. The only .NET features that don’t have us Delphi guys wondering “so what’s the
big deal? We’ve been doing that in native code for 13 years now” are some of the more
exotic Reflection tricks and LINQ, and we’re expecting to get native LINQ RSN.)

Anyway, what do you do that takes 300 coders? I’d guess something like OSes, but
OSes don’t tend to make heavy DB traffic one of their priorities…

________________________________
From: pjmlp@progtools.org (Paulo Pinto)
To: A list for developers using the SDL library. (includes SDL-announce)
Sent: Thursday, January 29, 2009 1:58:37 AM
Subject: Re: [SDL] Refresh trouble in X11


This is getting way off-topic, so I figured I’d just send it to you instead
of the whole list.

The evil that is Reply-To headers strikes again! ;-)


http://pphaneuf.livejournal.com/


Oh, and by the way, large chunks of Google are done in Java
(http://java.sun.com/developer/technicalArticles/J2SE/google/limoore.html)
and Python (things like App Engine). They have, uh, large amounts of
everything. :-)


http://pphaneuf.livejournal.com/

Heh. I don’t know what’s worse: Reply-To headers sending things to the wrong location when I
don’t want them to, or my phone’s inability to grok Reply-To headers when I do want to! :P

----- Original Message -----

From: pphaneuf@gmail.com (Pierre Phaneuf)
To: A list for developers using the SDL library. (includes SDL-announce)
Sent: Thursday, January 29, 2009 8:10:34 AM
Subject: Re: [SDL] Refresh trouble in X11



http://pphaneuf.livejournal.com/

