Latin had benefits because it was a dead language. We need a dead general-purpose software language. Microsoft doesn’t want such a dead language; they want to keep changing things, locking you into upgrades. Others without that incentive have no idea what a good production language should be, and usually go for the kitchen-sink option.
One day I may fix that, but next month I’m just buying a compiler upgrade.
Well, you can still buy a 747, but the 747-8 is significantly different. Plus there is the A380 now. The airframes are switching to composites, the actuators to electric, and the new engines sip fuel and are far less noisy. Engine reliability has gone way up, so you see more twinjets than in the past on long-haul routes.
The SSME was complex machinery but never failed in flight. NASA had plans for improved engines with onboard sensors (e.g., under SLI, the Space Launch Initiative), but everything was canned. Staged combustion is a major step forward in rocket engine technology; the performance is superior.
Well, bad code is bad code, whether it’s in assembly language or Haskell (or whatever the flavor of the week is… Ruby was the hot ticket a few years back). The only “bad Fortran” I’ve run into was Cray Fortran, which had a buggy compiler (this was circa 1986).
But I’m told Python is a really cool language and I should learn to use it. I’m not convinced a bad programmer (or more likely, a good engineer conscripted into programming) will produce code with fewer bugs in Python than in Fortran. And I know which language is faster for the kind of programming I do. And performance is important… I have a Phys Rev Letter where the single data point at beta=12 required 100 CPU-days on a Silicon Graphics Indy cluster. I can’t imagine how long it would have taken to do that in Python. Of course, computers are faster now, but that just means if I wrote the paper today I would try to run beta=14.
A decade ago Wired ran an infographic on the family tree of computer languages. They pointed out that someone had calculated that around 2500 to 5000 computer languages had been invented since the dawn of computing (a bit more than 1 per week). Here’s my scan of their graphic, which of course only shows the major languages. It’s out of date now; I wish Wired would give us an update.
A couple of languages not on your chart are Rebol (Forth-ish) and Euphoria, which was a nice clean language until it got turned over to the kitchen-sink crowd (who aren’t smart enough to know that it isn’t goto that creates spaghetti code; it’s labels as destinations that do).
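A toy C sketch of that distinction (my own made-up example, nothing to do with Euphoria): the gotos below all jump forward to one clearly named cleanup label and read fine; the spaghetti starts once a label becomes a destination that any number of unrelated places can jump to.

/* Hypothetical example, not from any real code base. The gotos
   themselves are tame: every one of them jumps forward to the same
   cleanup label. */
#include <stdio.h>
#include <stdlib.h>

int load_table(const char *path)
{
    FILE *f = fopen(path, "rb");
    char *buf = NULL;

    if (!f)
        goto fail;
    buf = malloc(4096);
    if (!buf)
        goto fail;
    if (fread(buf, 1, 4096, f) == 0)
        goto fail;

    free(buf);
    fclose(f);
    return 0;

fail:
    /* The label is where the danger lives: nothing stops retry:, parse:,
       or any other corner of a 2000-line routine from also jumping here,
       and then the reader can no longer tell how control arrived. */
    free(buf);              /* free(NULL) is a no-op */
    if (f)
        fclose(f);
    return -1;
}

int main(void)
{
    return load_table("table.dat") == 0 ? 0 : 1;
}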
VB6 is about the only code I’d want to maintain, even with its deficiencies.
I’d forgotten about Modula-2 and Prolog. I’d like to forget about SQL (thank goodness I don’t know much of it).
I think I remember why it was so difficult to port Fortran FEA code to C. In C an array reference is passed as a pointer to the base element, and the subroutine doesn’t automatically know anything about the array’s dimensions. There were a few other array issues that compounded the difficulty in a simple multi-pass text conversion approach.
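Roughly what that mismatch looks like, as a made-up sketch (not the actual FEA code): the Fortran routine declares the dummy array’s dimensions, so A(I,J) just works, while the C version only receives a pointer to the first element and has to be handed the leading dimension and do the index arithmetic itself.

/* Hypothetical sketch of the Fortran-to-C array mismatch described above.
   The Fortran side might look like:

       SUBROUTINE SCALE(A, LDA, N, M, S)
       DOUBLE PRECISION A(LDA,M), S
       DO 10 J = 1, M
       DO 10 I = 1, N
   10  A(I,J) = S * A(I,J)
       RETURN
       END

   In C the callee receives only a pointer to the base element, so the
   leading dimension must be passed explicitly and the two-dimensional
   indexing done by hand (column-major here, to match Fortran's layout). */
#include <stdio.h>

static void scale(double *a, int lda, int n, int m, double s)
{
    for (int j = 0; j < m; j++)
        for (int i = 0; i < n; i++)
            a[i + j * lda] = s * a[i + j * lda];
}

int main(void)
{
    double a[3 * 2] = {1, 2, 3, 4, 5, 6};  /* a 3x2 matrix, stored column-major */
    scale(a, 3, 3, 2, 10.0);
    printf("%g %g\n", a[0], a[5]);         /* prints: 10 60 */
    return 0;
}

Multiply that by every array argument in a big FEA code and a simple multi-pass text conversion doesn’t stand much of a chance.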
Once upon a time I was tasked to translate a 7000-line PL/1 code into Fortran. The thing that drove me crazy was PL/1’s ability to scope a subroutine as local to another subroutine. So there were a dozen subroutines named getsoln() or somesuch, all of which did different things specific to the subroutine in which they were embedded. Eventually I was able to get the whole thing translated, I believe correctly, but the effort failed because the program had to read in a big database of chemical properties. And that database was in binary, in the legendary encoding that IBM invented called EBCDIC, and each record was a mix of integer, floating-point, and character data. And EBCDIC, of course, was not supported on the VAX….
One of my friends at IBM needed to know exactly what their PC keyboard code on the 8051 microcontroller was doing. His calls to the keyboard folks went unanswered, so he finally accessed the chip to get a dump, then ran a disassembler and laboriously and thoroughly commented the result. There was one section of code that didn’t make any sense and didn’t seem to be character data. So he sent his reverse-engineered source code to the keyboard folks in an e-mail. Fifteen minutes later his phone rang. “Where did you get this? It’s fabulous! We only have a couple of copies of the source and they’re horrible.” Then my friend asked what the mystery code did. “Oh, that’s the IBM copyright notice in EBCDIC.”
I have a Phys Rev Letter where the single data point at beta=12 required 100 CPU-days on a Silicon Graphics Indy cluster. I can’t imagine how long it would have taken to do that in Python. Of course, computers are faster now, but that just means if I wrote the paper today I would try to run beta=14.
So touchingly old-fashioned. Takes me back to my ILLIAC-IV days, with the armed Marine who stood vigil over it in N-244 at ARC – very touchy about loop unrolling, IIRC. Quite an improvement over the cluster of CDC 7600s running RUN76.
These days you build your own supercomputer out of GPUs – see:
http://www.nrao.edu/meetings/bigdata/presentations/May4/1-Szalay/szalay-greenbank-2011.pdf
Then you have a Python script running in a recursive emulator (PyPy), which creates highly specialized, loop-unrolled assembly code from the supplied Python script – the recursive emulator has a parsed tree that you walk (and sometimes reorder). You end up with a very wide MIMD architecture running at an extremely high rate, where you throw away a lot of cycles to avoid pipeline breaks.
The OO stuff allows you to “decorate” data flows to hint at how to form optimal instruction groups to max out the machine. Really fun to watch.
We are talking thousands of times faster than your Indy cluster. Soon we’ll reach 10-100x more.
Conventional compilers just aren’t possible for this kind of extreme computing.
-nooneofconsequence.