So when we want a deterministic process, we invent a set of labels, each of which refers to exactly one thing (a singleton). Alongside them we define a set of rules that specify how those labels are transformed. Then we build machines that can interpret those instructions. The main advantage is that we know the possible outputs (assuming reasonable reliability) before we even have to act.
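To make that concrete, here's a minimal sketch (my own example, not from the original comment): a fixed alphabet of labels, a fixed set of rewrite rules, and a machine that applies them. Because everything is enumerated up front, every possible output is knowable before anything runs.

```python
# Illustrative sketch: a deterministic symbol-rewriting system.
# The labels are a fixed, unambiguous alphabet; RULES is the complete
# set of allowed transformations.
RULES = {
    "a": "ab",  # every 'a' becomes 'ab'
    "b": "a",   # every 'b' becomes 'a'
}

def step(word: str) -> str:
    """Apply the rules to every label in the word, in order."""
    return "".join(RULES[label] for label in word)

# Since labels and rules are fixed, the set of reachable outputs can be
# enumerated ahead of time; we never have to act first and observe.
word = "a"
for _ in range(5):
    word = step(word)
    print(word)  # ab, aba, abaab, abaababa, abaababaabaab
```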
LLMs don't work so well in that regard: while they have a perfect embedding of textual grammar rules, they don't have a good representation of what those labels refer to. All they have are relations between labels and how likely the labels are to be used together, not the sets those labels refer to or how the items in those sets interact.
Why would "membership in a set" not show up as a relationship between the items and the set?
In fact, it's not obvious to me that there's any semantic meaning not contained in the relationship between labels.
LLMs don't have access to these hidden attributes (think of trying to describe "blue" to someone born blind). They may understand that color is a property of objects, or that "black" is the color you wear to funerals in some places. But ask them to describe the color of a specific object and the output is almost guaranteed to be wrong. Unless the object is at a funeral in one of those places, in which case the model can predict that most people are wearing black. But that's a guess, not an informed answer.
So a program is a more restrictive version of the programming language, which is itself a more restrictive version of the computer. But the tools for specifying those restrictions are imperfect, because tightening them costs speed and intuitiveness (Haskell vs. Python).
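A rough illustration of that tradeoff (my own sketch, using Python type hints as a stand-in for the stricter end of the spectrum): the more of the restriction we write down, the more a checker can verify before the program runs, at the cost of extra ceremony and flexibility.

```python
# Illustrative only: the same function with and without its restrictions
# spelled out. Both run on the same interpreter (the "computer"); the
# second merely narrows what the language will accept.
def average_loose(xs):
    # Accepts anything; the restriction ("xs is a non-empty sequence of
    # numbers") lives only in the author's head and fails at runtime.
    return sum(xs) / len(xs)

def average_strict(xs: list[float]) -> float:
    # The restriction is written down, so a type checker such as mypy
    # can reject bad callers before the program ever runs, at the cost
    # of extra annotation and less flexibility.
    if not xs:
        raise ValueError("xs must be non-empty")
    return sum(xs) / len(xs)

print(average_strict([1.0, 2.0, 3.0]))  # 2.0
```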
We want software to be operationally precise because it allows us to build up towers of abstractions without needing to worry about leaks (even the leakiest software abstraction is far more watertight than any physical "abstraction").
But, at the level of the team or organization that's _building_ the software, there's no such operational precision. Individuals communicating with each other drop down to that precision when useful, but in any endeavor larger than 2-3 people, the _vast_ majority of communication occurs in purely natural language. And yet, this still generates useful software.
The phase change of LLMs is that they're computers that are finally "smart" enough to engage at this level. This is fundamentally different from the world Dijkstra was living in.
What is missed by many and highlighted in the article is the following: there is no way to be "precise" with natural language. The operational definition of precision involves formalism. For example, I could describe to you in English how an algorithm works, and maybe you would understand it. But for you to run that algorithm precisely requires a formal definition of a machine model and of the steps needed to program it.
The machine model for English is undefined! And this could be considered a feature rather than a bug: it allows a rich world of human meaning to be communicated, whereas formalism limits what can be done and communicated within its framework.
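To illustrate the algorithm point above (an example of mine, not from the thread): "repeatedly look at the middle element and discard the half that cannot contain the target" is a perfectly understandable English description of binary search, but to actually run it you must commit to details the English leaves open, such as how the midpoint is rounded, whether the interval is half-open, and what is returned on a miss.

```python
# Illustrative sketch: the English description above leaves several
# choices open; the code has to commit to every one of them.
def binary_search(sorted_xs, target):
    lo, hi = 0, len(sorted_xs)      # choice: half-open interval [lo, hi)
    while lo < hi:
        mid = (lo + hi) // 2        # choice: round the midpoint down
        if sorted_xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    # choice: return the index if found, otherwise -1
    if lo < len(sorted_xs) and sorted_xs[lo] == target:
        return lo
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```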