That's a good way to put it. It's pretty hard to convey this to someone who hasn't actively tried and solved real problems in such languages though. You don't realize how much the "words get in the way" (as Granny Weatherwax would say) until you give an array language a good chance.
Another pop-culture quote that resonated in this regard is from The Matrix:
> Your brain does the translating. I don't even see the code. All I see is blonde, brunette, redhead.
All I see is range, sum-over, divide. The symbols turn into the concepts directly in your head - not as conscious translation, but in the way I imagine Chinese or Japanese kanji characters translate directly in the head of a native speaker.
I understand (though do not personally agree with) the appeal of extreme terseness — there are arguments to be made that maximizing information density minimizes context switching, since it reduces the amount of scrolling. Personally, I find that large displays and split buffers mitigate this issue, whereas the mental overhead of using my short-term memory to map dozens of single-letter variables to semantic definitions is much higher than having to flip between splits/marks in my text editor. (The fact that the aforementioned Iverson Ghost languages [0] are popular whereas APL and its derivatives are not is evidence that I'm aligned with most people in this regard.)
I don't understand why people rarely make the terseness argument for non-array languages, even though it's just as easy to write tersely in them — the International Obfuscated C Code Contest is a prime example [1]. Is it just due to the influence of APL, or is there something special about array languages that gives them a unique proclivity for terseness?
[0] https://dev.to/bakerjd99/numpy-another-iverson-ghost-9mc
[1] https://www.ioccc.org, https://github.com/ioccc-src/winner/blob/master/2020/ferguso...
This constraint leads to symbol overloading. But careless, rampant overloading results in the same problem - too many things to remember. So you have to constrain your overloads.
With these constraints, if you want to design a practical, usable, general-purpose language (without forcing users to define every useful thing themselves), you have to choose composable abstractions. Prioritizing a single data structure (arrays) lets you focus your design effort and historically has good mechanical sympathy with the available computers, but there could just as easily be an "APL, but for associative maps" type language.
My point is "good terseness" comes from a holistic design approach, and simply making a bad language more terse will make its flaws more obvious.
Many constructs take fewer characters to algorithmically specify than to name (examples in K because that’s what I know best):
(+/x)%#x computes the average of a vector x; or the average of each column in a 2D matrix x; or other averages for other structures. It takes about as many characters as spelling “average”, which is considered too short a name in modern C or Java - and yet the code is instantly recognizable to any K programmer, and needs no documentation about how it deals with NaNs or an empty vector or whatever (which your named C/Java/K routine would - does it return 0? NaN? Raise an exception? Segfault?)
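For comparison, here is a direct Python transliteration of (+/x)%#x for the vector case (the function name `average` is my own); note how the edge case the prose mentions becomes a question the reader has to answer:

```python
def average(x):
    # (+/x)%#x, read right to left: count of x, sum over x, divide
    return sum(x) / len(x)

# Unlike the K idiom, the empty-vector behavior is now explicit:
# here, average([]) raises ZeroDivisionError.
```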
And ,// (yes - that’s comma slash slash) flattens a recursive list. It’s way shorter than its name, and it’s the entire implementation.
Are these the most numerically stable / efficient ways to average or flatten? No. But they are the least-cognitive-load, fastest-to-grasp-and-pattern-match when reading code. Once you are used to them.
The appeal of Iverson languages also comes from a good selection of primitives. Most modern languages such as C++, Python, Nim, even Rust have an implicit focus on the “meta” programming: they give you the tools (templates, macros, classes) to build abstractions, with which you later build your actual computation. K / J / APL / BQN expect you to do the computation with much fewer abstractions - but provide primitives that make that incredibly easy.
For example, there is a “grade” primitive which returns a vector that - if used to index your original list - would sort it.
Now, say you have a list of student names, and ages, and you wish to sort them - once alphabetically, once by age. In idiomatic C++/Python etc, you’d have a “student” class with three fields. Then you’d write some comparator functions to pass to your sort routine. (I am aware of accessors and the key arg to pythons sort; assume for a second they aren’t there).
In K/APL/J, you’d just have 3 lists whose indices correspond: and then it is just:
name[<age]
Read: “name indexed by grading of ages”. There’s a terser version, name@<age, read: “name at grade of age”. The terseness compounds. Once you are used to it, every other programming language seems uselessly bloated.
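In plain Python terms (the variable names here are made up for illustration), the grade-then-index pattern looks like this:

```python
names = ["Ann", "Bob", "Cy"]
ages = [30, 25, 35]

# "grade" of ages: the permutation that would sort them
# (K's <, NumPy's argsort)
grade = sorted(range(len(ages)), key=lambda i: ages[i])

# name[<age] : names indexed by the grade of ages
by_age = [names[i] for i in grade]
```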
None of these things apply to obfuscated or shortened C.
Arthur Whitney released the K source code, which is C written in the same style. It does not have the same appeal.
For example, the NumPy equivalents of your examples are not materially longer than their APL/J equivalents, but are easily readable even by people unfamiliar with NumPy:
> the average of a vector x; or the average of each column in a 2D matrix x
x.mean(0)
or, to use your example verbatim, x.sum(0)/x.shape[0]
> flattens a recursive list

x.ravel()
> name indexed by grading of ages

name[age.argsort()]
Though for this application, you’d probably be using a dataframe library like Pandas, in which case this would be df.sort_values(“age”)[“name”]
The call to .ravel() is strictly less powerful than ,// which would flatten a matrix, but also a lisp/xml style recursive list structure. And it is the actual implementation, not some weird name! It is “join over until convergence”.
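A sketch of that reading in Python, assuming “join” splices one level of nesting and “over until convergence” means iterating to a fixpoint:

```python
def join(xs):
    # one level of , (join): splice sublists, keep atoms
    out = []
    for v in xs:
        if isinstance(v, list):
            out.extend(v)
        else:
            out.append(v)
    return out

def flatten(xs):
    # ,// : apply join over and over until the result stops changing
    while True:
        ys = join(xs)
        if ys == xs:
            return ys
        xs = ys
```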
With respect to sorting, in K you would also likely use the built in relational operator “?” (select).
Notice how you need to import pandas and numpy, and then know their docs well to find the routines you want and how they behave in edge cases? And that’s in addition to actually knowing Python?
K has all of that built in. You just need to know the basics (which takes more work than knowing Python well, admittedly); most of the rest is derived by construction. It does have some 80 or so non-trivial primitives, but then you need far fewer libraries, often none.
(And, that’s not a for/against thing, but … in case you wonder, the K executable does that in about 200K binary without dependencies; REBOL achieves similar terseness of final programs by completely different means and philosophy, and also packs that into a 400K executable)
APL-style idioms are not at all comparable to functions on a class or within a library, because idioms are self-describing in their entirety, requiring only an understanding of the primitive operators of the language, whereas a named function obscures and subordinates detail.
I wish APL derivatives embraced some of these ideas, and made their magic spells easier to parse visually, and to format for readability. I don't know how to achieve that easily. Mathematical notation took centuries to develop. It took a quarter of a century for programming languages to normalize indentation. Maybe APLesque languages will eventually come up with a notation that's less impenetrable than APL / K / Q, but less verbose than Pandas.
I think it's important to remember that APL was born of an era where having a keyboard dedicated to a single programming language was reasonable.
I think there's a lot of middle ground between the line-noise syntax of, say, J (jsoftware), and a pure Lisp-style prefix notation.
This is a little snippet from Stevan Apter, a k programmer that has an old school home page with a lot of array language curiosities:
https://nsl.com/papers/kisntlisp.htm
If you look at the pseudo code example at the end using words vs ascii glyphs, I think that's quite readable while also concise. It uses k's bracket notation to easily specify slices of nested arrays.
Interestingly, Kx themselves went this route with q (Stevan's essay predates this, I believe). There they kept nearly all the power of k but exposed it via a more approachable SQL-like syntax.
I think that's exactly on point. Here are two examples from Euclid which demonstrate why mathematics, and likely science, were stunted for millennia:
"if a first magnitude and a third are equal multiples of a second and a fourth, and a fifth and a sixth are equal multiples of the second and fourth, then the first magnitude and fifth, being added together, and the third and sixth, being added together, will also be equal multiples of the second and the fourth, respectively."
a(x + y) = ax + ay
"If a quantity increased by three becomes five, then that quantity has to be two".
X + 3 = 5
X = 2
I would probably prefer this, because then I could actually look up the unfamiliar terms and have some hope of eventually figuring out what the equation meant, instead of being brought up short and stumped by inscrutable notation.
I can imagine that it's a much different situation with array languages, where the notation is actually learnable, because it is defined and documented.
>This allows one to think faster and further than they could encumbered by a heavier syntax.
How does the programming language limit your ability or speed of thinking, at least when the fundamental data types and operations are the same? The hard work is always knowing what to implement. Saving some keystrokes for reversing an array, or whatever other array manipulation, is hardly a game changer.
If I am thinking about any problem, it's outside the scope of any programming language in the first place. My point is, modern languages already make array manipulations simple enough. Even if you're doing something like coding LLMs from scratch, numpy, list concatenations, list comprehensions, lambdas, and stream/map/reduce all exist, and implementing with them is not nearly the issue that writing assembly vs Python is.
The prime-number example in Python looks like this:

all(x % i != 0 for i in range(2, x))

This pretty much does the same operations in the same order on the same fundamental data structure, so I just don't see what's fundamentally different about the Klong way of thinking.
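Wrapped as a runnable function for completeness (the `is_prime` name is mine, and this is the naive trial-division check from the one-liner, not something you'd use for large numbers):

```python
def is_prime(x):
    # x is prime if x >= 2 and no i in 2..x-1 divides it
    return x >= 2 and all(x % i != 0 for i in range(2, x))
```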
Anyway, I don't mean to argue; if it works for you, great. I wish I had something new to add.
An example of this is APL.jl [1][2]
It seems more of a proof-of-concept than a real implementation though - at least, the APL wiki calls it a "toy dialect of APL" and says that it "works on a minor subset of APL". [3]
[1] demo: https://nbviewer.org/gist/shashi/9ad9de91d1aa12f006c4 [2] repo: https://github.com/shashi/APL.jl [3] https://aplwiki.com/wiki/APL.jl
It's worth noting that almost all array languages are interpreted, and iteratively constructing a program in the interpreter is common. This style would be painful in a more verbose and lower-level language but works great for ultra-high-level array languages.