Well, it did eliminate some kinds of coding (the part where a human produces machine code from block diagrams) and debugging (the part where you look for errors in that translation).
acqq
Exactly. There are things that FORTRAN did right and that deserve to be studied even now:
Just a day ago I mentioned that the printf equivalent that existed in FORTRAN as early as 1956 was able to do type checking of the parameters and compile-time code generation, versus the run-time interpretation in C's printf:
http://news.ycombinator.com/item?id=3964475
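To make the C half of that comparison concrete (an illustration, not part of the linked comment): printf's format string is only interpreted at run time, so a mismatched argument still compiles and misbehaves when it runs; compilers that do warn about this only manage it by special-casing printf.

    #include <stdio.h>

    int main(void) {
        /* The format string is parsed at run time; the language itself does
           not require the arguments to match it.  This compiles (with at most
           a warning) and invokes undefined behavior, typically printing
           garbage. */
        printf("%d\n", 3.14);

        /* The matching call, for comparison. */
        printf("%f\n", 3.14);
        return 0;
    }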
Yes, and the paper distinguishes between programming and coding. Step one is "Analysis and Programming", step two is "Coding." It's step two that FORTRAN virtually eliminates.
gruseom
Correct. It's only because of the success of higher-level languages like FORTRAN that "coding" and "programming" came to mean the same thing. Before that, programming was part of the requirements.
We've become inured to silver-bullet bullshit in software, but it's an anachronism to think that about this report, which was dead right. In fact by subsequent standards their claim was rather modest. The full quote reads:
Since FORTRAN should virtually eliminate coding and debugging, it should be possible to solve problems for less than half the cost that would be required without such a system.
Peaker
Compared to hand-writing machine code, it might even be a reasonable thing to say. The majority of effort is eliminated.
DeepDuh
As someone currently involved in GPGPU research, I find that these descriptions remind me of the state we are in for GPGPU computing. CUDA, OpenCL and now OpenACC have been steps towards a higher-level abstraction of stream computing, and every time a new framework or language bubbles up, its inventors praise it as the end of coding close to the machine.
jhrobert
Half a century later, people are still overly optimistic about software development. According to recent studies, this is true of every field (and society at large actually favors such optimism).
Yet it is particularly visible in computing. Why is that?
Side note: bullshit in marketing brochures is here to stay.
maclaren
Half a century later, though, the change on offer is no longer the jump from assembly to a genuinely useful level of abstraction. And the bullshit-o-meter reading is damped further when half the document consists of code examples that would be much more difficult to write in assembly.
koeselitz
What's overly optimistic about saying that higher-level languages will virtually eliminate the need to hand-debug machine code? Isn't that what happened?
mrgoldenbrown
If you limit yourself to the types of programs that were created before FORTRAN existed, then this might be true. But of course, as capability increased, demand for more complicated programs increased just as fast (or faster?).
ericHosick
This vision of eliminating coding will eventually be realized. It is inevitable.
However, people will still need to know how to program (assuming AI is not a factor).
josefonseca
> eliminating coding will eventually be realized
> people will still need to know how to program
I think that by coding, you mean typing? In your example, we won't be eliminating coding, we'll just be entering code using a different language.
ericHosick
Not coding, and not a programming language to speak of. For example, when someone configures their browser to use a proxy server, they are programming without coding (though they do need to type).
It could be possible to verbally configure the browser to use a proxy server, and I guess that would be the different language you speak of. However, this leads to the need for AI or some kind of "intelligent" system, and it is difficult to make one that isn't domain specific.
Though such an intelligent system is also inevitable (in my opinion), I think there is a step between describing software using such a system and writing code as we do today.
This step would be some kind of domain-agnostic software framework that can be used to create software without the need to "code out" a solution. The framework itself would need to be coded, but the usage of the framework would not.
josefonseca
Coding is necessary for computing; it's not going away, just like math isn't going away due to the evolution of computers. Better computers will use math better, but math is there because it's the basis for what we call "computing". Coding is the method by which we turn our ideas into practical logic.
If you think typing in long programs is going away, you may be right - the future looks more like a Lego style of programming than the current sea of logical equations. But in that case I'd say you've coded your message in the form of Lego blocks; coding is still there in essence.
When you speak instructions into a computer, you've still coded. When you change your proxy as in your example, you've coded (in a non-imperative way).
koeselitz
Thankfully, they were right - it did. I've known a lot of C programmers, C++ programmers, Python programmers, and Java programmers; only a tiny handful of them actually knew how to "code" (that is, write machine code). FORTRAN, and the higher-level languages that came after it, really did "virtually eliminate coding and debugging."
lifthrasiir
Bear in mind that the report was written in 1954. The real complexity of programming, and of computing in general, was not fully understood at that time. (I'm confident that we still do not understand it in its entirety, however.)
Edootjuh
What do you mean by that, exactly? Just out of curiosity.
praptak
Many results about the infeasibility of computer-based solutions to some problems were not yet known. For example, the first important results about NP-hardness date from the seventies.
I believe it wasn't even clear that big-O complexity is important in assessing how effective an algorithm is. This is pretty much obvious to us now (sometimes too obvious - there are edge cases where the constant factor wins over the asymptotic complexity).
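To make that parenthetical concrete (an illustration, not from the thread): for very small inputs an O(n^2) insertion sort routinely beats O(n log n) sorts simply because its constant factor is tiny, which is why production sort routines fall back to it for short subarrays.

    #include <stdio.h>
    #include <stddef.h>

    /* O(n^2) insertion sort: poor asymptotics, but almost no per-element
       overhead, so it wins on very small arrays.  Real qsort/mergesort
       implementations typically switch to something like this once a
       partition drops below a small cutoff. */
    static void insertion_sort(int *a, size_t n) {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = key;
        }
    }

    int main(void) {
        int a[] = {5, 2, 9, 1, 7};
        size_t n = sizeof a / sizeof a[0];
        insertion_sort(a, n);
        for (size_t i = 0; i < n; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }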
scott_s
I agree with everything you said, except the parenthetical. I have difficulty considering matrix multiplication an "edge case," and I think that it's more common than you imply for us to choose algorithms with a higher asymptotic bound because of constant factors and architectural effects (mostly caching).
praptak
OK, agreed about matrix multiplication and probably a few other problems, but I disagree about cache.
You cannot really say that cache-aware algorithms have a higher asymptotic bound. Some of them might happen to have a below-optimal asymptotic bound in a cache-unaware memory model, which sort of misses the point of the algorithms being aware of the cache.
scott_s
> You cannot really say that cache-aware algorithms have a higher asymptotic bound.
Don't worry, I'm not. I'm saying that sometimes, naive algorithms have better cache behavior than more complicated algorithms with lower asymptotic bounds.
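One concrete case (my illustration, not from the thread): on short sorted arrays, a plain O(n) linear scan often beats O(log n) binary search, because the scan walks memory sequentially and its branches are predictable, while the binary search's probes jump around.

    #include <stdio.h>
    #include <stddef.h>

    /* O(n) scan over a sorted array: sequential memory access, easily
       predicted branches. */
    static ptrdiff_t linear_search(const int *a, size_t n, int key) {
        for (size_t i = 0; i < n; i++)
            if (a[i] >= key)
                return a[i] == key ? (ptrdiff_t)i : -1;
        return -1;
    }

    /* O(log n) binary search: fewer comparisons, but the probes jump around
       in memory and the branch is hard to predict, so on short arrays the
       "naive" scan above is often faster in practice. */
    static ptrdiff_t binary_search(const int *a, size_t n, int key) {
        size_t lo = 0, hi = n;
        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;
            if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid;
        }
        return (lo < n && a[lo] == key) ? (ptrdiff_t)lo : -1;
    }

    int main(void) {
        int a[] = {1, 3, 5, 7, 9, 11};
        size_t n = sizeof a / sizeof a[0];
        printf("%td %td\n", linear_search(a, n, 7), binary_search(a, n, 7));
        return 0;
    }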
michaelochurch
Not the grandparent poster, but:
I think that one of the differences is that, in 1954, computer programs were single entities written by one person or by a team working closely together. So the "between two pieces of working code" bugs-- which tend to be the nastiest kind in modern development-- weren't really seen yet. Million-line codebases weren't even on the table as a reasonable concept, and the idea of a program depending on 120 other libraries or frameworks was unimaginable.
scott_s
I agree with everything above, but I would like to sum it up in my own words: we (as a species) were so new at writing software that we had no idea of its inherent complexities. Programming was an afterthought for those who designed the first computers.
Zaak
> As soon as we started programming, we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realized that a large part of my life from then on was going to be spent in finding mistakes in my own programs.
— Maurice Wilkes
http://groups.engin.umd.umich.edu/CIS/course.des/cis400/fort...