Preferences


In my mind this highlights something I've been thinking about: the difference between FOSS influenced by corporate needs and FOSS driven by the hacker community.

FOSS driven by hackers is about increasing and maintaining support (old and new hardware, languages, etc.), while FOSS influenced by corporate needs is about standardizing around 'blessed' platforms, as is happening in Linux distributions with the adoption of Rust (architectures unsupported by Rust lose support).

> while FOSS influenced by corporate needs is about standardizing around 'blessed' platforms like is happening in Linux distributions with adoption of Rust

Rust's target tier support policies aren't based on "corporate needs". They're based, primarily, on having people willing to do the work to support the target on an ongoing basis, and provide the logistics needed to make sure it works.

The main difference, I would say, is that many projects essentially provide the equivalent of Rust's "tier 3" ("the code is there, it might even work") without documenting it as such.

The Rust community is working on gcc-rs for this very reason.
gcc-rs is far from usable. If you want to use Rust on gcc-only targets, you're probably better off with rustc_codegen_gcc instead.
One could also compile to wasm, and then convert that wasm to C.
It sounds convenient %)
The issue is that certain specific parts of the industry currently pour a lot of money into the Rust ecosystem, but selectively, only where they need it.
How is that different than scratching one’s own itch?
Personal itches are more varied and strange than corporate itches. What companies are willing to pour time (money) into is constrained by market forces. The constraints on the efforts of independent hackers are different.

Both sets of constraints produce patterns and gaps. UX and documentation are commonly cited gaps for volunteer programming efforts, for example.

But I think it's true that corporate funding has its own gaps and other distinctive tendencies.

It is not, but the open-source community should be aware of this and not completely reorganize around the itches of specific stakeholders, at least the parts of the community that are not paid by them.
The big difference is that Algol 68 is set in stone. This is what allows a single dedicated person to write the initial code and for it to keep working essentially forever with only minor changes. The Rust frontend will inevitably become obsolete without active development.

Algol 68 isn’t any more useful than obsolete Rust, however.

The core Algol 68 language is indeed set in stone.

But we are carefully adding many GNU extensions to the language, as was explicitly allowed by the Revised Report:

  [RR page 52]
  "[...] a superlanguage of ALGOL 68 might be defined by additions to
   the syntax, semantics or standard-prelude, so as to improve
   efficiency or to permit the solution of problems not readily
   amenable to ALGOL 68."
The resulting language, which we call GNU Algol 68, is a strict super-language of Algol 68.

You can find the extensions currently implemented by GCC listed at https://algol68-lang.org/

I had a small programming task a while ago and decided to try doing it in Algol 68 (using the Algol 68 Genie interpreter), simply because I'd had some exposure to the language many years ago at uni.

It was an AWK-like task, but I decided up front it was too much trouble to do in AWK, as I needed to build a graph of data structures from the input data.

In part the program had an AWK-like pattern matching and processing section, which wasn't too awkward. I found having to use REFs more trouble than dealing with pointers, in part due to the forms of auto-dereferencing the language uses; but that was expected.

The real problem though was that I ended up needing something like a map / hash-table, and I concluded it was too much trouble to write from scratch.

So in the end I switched the program to be written in Go.

That then suggests a few things to me:

    - it should have an extension library (prelude) offering some form of hash table.

    - it would be useful to add syntax for explicit pointers (a PTR keyword) which are not automatically dereferenced when used.

    - maybe add something like the Go (or Zig) style syntax for selecting a member of a pointed-to struct (a.b), and maybe Zig-like explicit dereference (ptr.*).

The latter pointer suggestions are because I found the "field OF struct" form too verbose, and especially confusing when juggling REFs which may or may not get auto-dereferenced.
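To make the complaint concrete, a small sketch of the auto-dereferencing in question (Genie-style syntax; NODE and its fields are made up for illustration):

    MODE NODE = STRUCT (INT val, REF NODE next);
    REF NODE head = HEAP NODE := (1, NIL);
    REF NODE second = HEAP NODE := (2, NIL);
    next OF head := second;
    # selection weakly dereferences, so the same OF chain works on a #
    # NODE or a REF NODE; convenient, but it hides where the         #
    # dereferences actually happen:                                  #
    print ((val OF next OF head, newline))  # prints 2 #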
It's funny, I have a different view. Corporates often need long-term maintenance and support for weird old systems, while the majority of the global programming community chases shiny new trends in their personal tinkering.

However, I think there are retro-computing and other hobby niches that align with your hacker view. And certainly there's a bunch of corporate enthusiasm for standardizing shiny things.

I think you are both partially right. In fact, the friction I see is where the industry relies on the open-source community for maintenance but then pushes through certain changes it thinks it needs, even if this alienates part of the community.
I don’t know that that is fair.

A number of years ago I worked on a POWER9 GPU cluster. This was quite painful: Python had started moving to wheels, and most projects had started to build these automatically in CI pipelines, but pretty much none of them even supported ARM, let alone the POWER9 architecture. So you were on your own for pretty much anything that wasn't NumPy. The reason, of course, is simply that there was little demand, and as a result even fewer people willing to support it.

Not just little demand; the hardware is also expensive and uncommon. If the maintainers don't have the hardware to test on, they can't guarantee support for it. Not having hardware available often happens because there's little demand for it, but the difficulty of maintaining software for rare hardware further reduces the demand for that hardware.
At least it's been fine for four years of research software on a POWER9 cluster I support (with nodes like the Summit system's).
You nailed it. In my spare time I am maintaining old Win32 apps that corporates and the always-the-latest-and-greatest crowd have abandoned.

Most people don't care about our history, only about what is shiny.

It is sad!

You don't think the movement to Rust is driven by hackers?
Rust is by no means allowed in the core yet, only in drivers. So far there are only a few drivers; currently, only the Nova driver, Google's Binder IPC, and the (out-of-tree) Apple drivers are of practical relevance.
As a fan of Algol 68, I'm pretty excited for this.

For people who aren't familiar with the language, pretty much all modern languages are descended from Algol 60 or Algol 68. C descends from Algol 60, so pretty much every popular modern language derives from Algol in some way [1].

[1] https://ballingt.com/assets/prog_lang_poster.png

Yes, massively influential, but was it ever used or popular? I always think of it as sort of the poster child for the danger of "design by committee".

Sure, its ideas spawned many of today's languages. But wasn't that because at the time nobody could afford to actually implement the spec? So we ended up with a ton of "Algol, buts" (like Algol, but it can actually be implemented and runs on real hardware).

Yes; for example, the UK Navy had a system developed in an Algol 68 subset.

https://academic.oup.com/comjnl/article-abstract/22/2/114/42...

Used extensively on Burroughs mainframes.
Wow, the Burroughs large systems had special instructions explicitly for efficient Algol use. You could almost say it was Algol hardware. But Algol 60, not 68.

https://en.wikipedia.org/wiki/Burroughs_Large_Systems

There is a large-systems emulator that runs in a browser. I did not get any Algol written, but I did have way too much fun going through the boot sequence.

https://www.phkimpel.us/B5500/webUI/B5500Console.html

Burroughs used an Algol 60 derivative (not '68): ESPOL initially, which evolved into NEWP.
I would argue C comes from Algol 68 (structs, unions, pointers, a full type system, etc., no call by name) rather than Algol 60.
C had three major sources: B (derived from BCPL, which had been derived from CPL, which had been derived from ALGOL 60), IBM PL/I, and ALGOL 68.

Structs come from PL/I, not from ALGOL 68, together with the postfix operators "." and "->". The term "pointer" also comes from PL/I; the corresponding term in ALGOL 68 was "reference". The prefix operator "*" is a mistake peculiar to C, acknowledged later by the C language designers; it should have been a postfix operator, like in Euler and Pascal.

Examples of things that come from ALGOL 68 are unions (unfortunately C unions lack most useful features of the ALGOL 68 unions, which are implicitly tagged) and the combined operation-assignment operators, e.g. "+=" or "*=".
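
For readers who haven't seen them, a tiny sketch of what "implicitly tagged" means: an ALGOL 68 conformity clause dispatches on the mode actually stored in the union (Genie-style syntax; NUM is a made-up mode):

    MODE NUM = UNION (INT, REAL);
    NUM v := 3.14;
    CASE v IN
      (INT i):  print (("an INT: ", i, newline)),
      (REAL r): print (("a REAL: ", r, newline))
    ESAC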

The Bourne shell scripting language, inherited by ksh, bash, zsh etc., also has many features taken from ALGOL 68.

The explicit "malloc" and "free" also come from PL/I. ALGOL 68 is normally implemented with a garbage collector.

C originally had =+ and =- (up to and including Unix V6); they were ambiguous (does a=-b mean a = -b or a = a-b?) and were replaced by += and -=.

The original structs were pretty bad too: field names had their own address space and could sort of be used with any pointer, which sort of allowed you to make tacky unions. We didn't get a real type system until the late 80s.

ALGOL 68 had "=" for equality and ":=" for assignment, like ALGOL 60.

Therefore the operation-with-assignment operators were like "+:=".

The initial syntax of C was indeed weird, and it was caused by the way the original parser in the first C compiler happened to be written and rewritten; the later form of the assignment operators was closer to their source in ALGOL 68.

Yeah, if you ever wondered why the fields in a lot of POSIX APIs have names with prefixes, like tm_sec and tv_usec, it's because of this misfeature of early C.
> it should have been a postfix operator, like in Euler and Pascal.

I never liked Pascal-style Pointer^, as the postfix starts to get visually cumbersome with more than one layer of Indirection^^. Especially when combined with other postfix Operators^^.AndMethods. Or even just Operator^ := Assignment.

I also think it's the natural inverse of the "address-of" prefix operator. So we have "take the address of this value" and "look through the address to retrieve the value."

The "natural inverse" relationship between "address-of" and indirect addressing is only partial.

You can apply the "*" operator as many times as you want, but applying "address-of" twice is meaningless.

Moreover, in complex expressions it is common to mix the indirection operator with array indexing and with structure member selection, and all these 3 postfix operators can appear an unlimited number of times in an expression.

Writing such addressing expressions in C is extremely cumbersome, because they require a great number of parenthesis levels, and it is still difficult to see the order in which the operators are applied.

With a postfix indirection operator no parentheses are needed and all addressing operators are executed in the order in which they are written.

So it is beyond reasonable doubt that a prefix "*" is a mistake.

The only reason why they chose "*" as a prefix in C, which they later regretted, was that it seemed easier to define the expressions "*++p" and "*p++" to have the desired order of evaluation.

There is no other use case where a prefix "*" simplifies anything, and for the postfix and prefix increment and decrement it would have been possible to find other ways to avoid parentheses; even if they had required parentheses, that would still have been simpler than mixing "*" with array indexing and structure member selection.

Moreover, the use of "++" and "--" with pointers was only a workaround for a dumb compiler, which could not determine by itself whether it should access an array using indices or pointers. Normally there should be no need to expose such an implementation detail in a high-level language; the compiler, not the programmer, should choose the addressing modes that are optimal for the target CPU. On some CPUs, including the Intel/AMD CPUs, accessing arrays by incrementing pointers, as in old C programs, is usually worse than accessing them through indices (because on such CPUs the loop counter can be reused as an index register, regardless of the order in which the array is accessed, including for accessing multiple arrays, avoiding the use of extra registers and reducing the number of executed instructions).

With a postfix "*", the operator "->" would have been superfluous. It has been added to C only to avoid some of the most frequent cases when a prefix "*" leads to ugly syntax.

A dash instead of a dot would be much more congruent with the way Latin script generally renders compound terms. And a reference/pointer (or even "pin" for short) is really nothing much different from any other function/operator/method.

some·object-pin-pin-pin-transform is not harder for a human to parse or interpret than (***some_object)->transform().

C's «static» and «auto» also come from PL/I. Even though «auto» has never been used in C, it has found its place in C++.

C also had a reserved keyword, «entry», which had never been used before eventually being relinquished from its keyword status when the standardisation of C began.

C23 has also reused auto, as in C++, although its type inference is more limited.
That is indeed correct. Kernighan in his original book on C cited Algol 68 as a major influence.
> I'm pretty excited for this

Aside from historical interest, why are you excited for it?

Personally, I think the whole C tangent was a misstep and would love to see Algol 68 turn into Algol 26 or 27. I sort of like C and C++ and many of the other languages that came after, but they have issues. I think Algol 68 could develop into something better than C++; it has some of the pieces already in place.

Admittedly, every language I really enjoy and get along with is one of those languages that produced little compared to the likes of C (APL, Tcl/Tk, Forth), and as a hobbyist I have no real stake in the game.

I wonder what you think is wrong with C? C is essentially a much simplified subset of ALGOL 68. So what is missing in C?
Proper strings and arrays, for starters, instead of pointers for which the programmer is responsible for the length housekeeping.
I think what C is missing is everything that makes people fall back on clever use of pointers and macros. Not that I think C should have all those things; Zig does a decent job of showing alternatives.
Whilst I think that C has its place, my personal choice for Algol 26 or 27 would be CLU – a highly influential yet little-known and underrated Algol-inspired language. CLU is also very approachable and pretty compact.
Consider exploring Ada 2022 as a capable successor to Algol. It's well supported in GCC and scales well from very small to very large projects. Some information is at https://learn.adacore.com/ and https://alire.ada.dev/
I'd like to offer a complementary question to the sibling one: what would you add to (or remove from) Algol 68 to get Algol 26?
That task would be beyond my skills; as I said, I am just a hobbyist. I think it would be interesting to see what would result from going back to one of those early foundational languages and developing a modern language from it. With a language like Algol we don't have the decades of evolution (baggage) that are a big part of languages like C and C++ and that trickle into the languages they inspire, even when those try to shed it. So, what would we get if we went back to the start and built a modern language off of Algol? What would that look like?
Wouldn't that be some form of Pascal?
I've actually been toying with writing an Algol 68 compiler myself for a while.

While I doubt I'll do any major development in it, I'll definitely have a play with it, just to revisit old memories and remind myself of its many innovations.

If PL/I was like the C++ of its time, Algol 68 was probably comparable to a Scala of the time: a number of mind-boggling ideas (for the era), complexity, an array of kitchen sinks.
It certainly has quite a reputation, but I suspect that has more to do with the dense formalism, which was quite unlike anything else. The language itself is actually surprisingly nice for its time: very orthogonal and composable.
Finally.
I find this great: finally an easy way to play with ALGOL 68, beyond the few systems that made use of it, like the UK Navy project back in the day.

Ironically, Algol 68 and Modula-2 are getting more contributions than Go among the GCC frontends; gccgo seems stuck at version 1.18, in a situation similar to gcj's.

Either way, today is for Algol's celebration.

This makes me worry for the GCC implementation of Rust. People do not seem to use or keep up the GCC versions of languages whose primary open-source implementations are elsewhere.
There is the advantage that GCC will be the only way for Rust to be available on some targets where LLVM isn't an option.

Regarding Go, gccgo was a way to have a better compiler backend for those who care about optimizations the reference Go compiler isn't capable of, due to differences in history, philosophy, whatever.

Apparently that effort isn't seen as worthwhile by the community.

They could just fork the golang frontend and it would be the same, maybe with the runtime patched a bit.
Being an old dog, as I mention elsewhere, I see a pattern with gcj.

GCC has some rules for adding frontends to, and keeping them in, the main compiler rather than in additional branches; e.g. GNU Pascal never got added.

So if the maintenance effort brings no value, the GCC steering committee will eventually discuss this.

Does GCC even support Go?
Until a few years ago, gccgo was well maintained and trailed the main Go compiler by one or two releases, depending on how the release schedules aligned. Having a second compiler was considered an important feature. Currently the latest supported version is Go 1.18, but without generics support. I don't know if it's a coincidence, but porting generics to gccgo may have been the hurdle that broke the cadence.
The best thing about gccgo is that it is not burdened with the weirdness of golang's calling convention, so the FFI overhead is basically the same as calling an extern function from C/C++. Take a look at [0] and see how badly golang's cgo call latency compares to C's. gccgo is not listed there, but from my own testing it's the same as C/C++.

[0]: https://github.com/dyu/ffi-overhead

> The best thing about gccgo is that it is not burdened with the weirdness of golang's calling convention

Interesting. I saw Go breaking from the C ABI as the primary reason to use it; otherwise you might as well use Java or Rust.

Isn't that horribly out of date? More recent benchmarks elsewhere, performed after some Go improvements, show Go's C FFI having drastically lower overhead, by at least an order of magnitude, IIUC.
Seems doubtful. Given that generics and the gccgo compiler were both spearheaded by Ian Lance Taylor, him leaving Google looks like the more likely suspect to me, but I don't track Go.
It had been stagnant long before he left.
Yes, though language support runs behind the main Go compiler. https://go.dev/doc/install/gccgo
Where might one look to find examples of such code? I've never come across Algol outside of Wikipedia.
https://rosettacode.org/wiki/Category:ALGOL_68

https://github.com/search?q=algol68&type=repositories

Without knowing your interests/motivations and background, it is hard to make good recommendations, but if you didn't know about Rosetta Code or GitHub, I figured I should start with those.

What I'm taking away from this is that there's absolutely zero code of interest written in Algol 68.
Interests vary!

Just because you can’t find something interesting doesn’t mean it isn’t interesting.

That lesson, once learned, pays dividends.

You can find some modern Algol 68 code, using the modern stropping which is the default in GCC, at https://git.sr.ht/~jemarch/godcc

Godcc is a command-line interface for Compiler Explorer written in Algol 68.

Old papers and computer manuals from the 1960s.

Many have been digitized over the years across Bitsavers, ACM/SIGPLAN, IEEE, and university departments.

It also heavily influenced languages like ESPOL, NEWP, and PL/I and its variants.

Will it compile Knuth’s test? https://en.wikipedia.org/wiki/Man_or_boy_test
That test is short enough to just paste it in here:

    begin
      real procedure A(k, x1, x2, x3, x4, x5);
      value k; integer k;
      real x1, x2, x3, x4, x5;
      begin
        real procedure B;
        begin k := k - 1;
              B := A := A(k, B, x1, x2, x3, x4)
        end;
        if k ≤ 0 then A := x4 + x5 else B
      end;
      outreal(1, A(10, 1, -1, -1, 1, 0))
    end
The whole "return by assigning to the function name" is one of my least favorite features of Pascal, which I suppose got it from Algol 60. Where I'm confused though is, what is the initial value of B in the call to A(k, B, x1, x2, x3, x4)? I'm guessing the pass-by-name semantics are coming into play, but I still can't figure out how to untie this knot.
Yeah, that's one of the things the test was designed to catch: at that point, B is a reference to the B that is being defined. Here's a C++ translation from https://oeis.org/A132343 that uses identity functions to make the types consistent:

    #include <functional>
    #include <iostream>
    using cf = std::function<int()>;
    int A(int k, cf x1, cf x2, cf x3, cf x4, cf x5)
    {
        int Aval;
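        // note: [&] captures B itself by reference; by the time the lambda runs, B is initialized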
        cf B = [&]()
        {
            int Bval;
            --k;
            Bval = Aval = A(k, B, x1, x2, x3, x4);
            return Bval;
        };
        if (k <= 0) Aval = x4() + x5(); else B();
        return Aval;
    }
    cf I(int n) { return [=](){ return n; }; }
    int main()
    {
        for (int n=0; n<10; ++n)
            std::cout << A(n, I(1), I(-1), I(-1), I(1), I(0)) << ", ";
        std::cout << std::endl;
    }
So in the expression `A(k, B, x1, x2, x3, x4)`, the `B` there is not called; it simply refers to the local variable `B` (inside the function `A`) that was captured by the lambda by reference: the same B variable that is currently being assigned.
Thanks, that's a bit easier to trace. I think what broke my brain initially is that the x1-x5 parameters were declared as real, when they're apparently nullary functions returning a real. Brings to mind CAFs in Haskell. And all that in 1960, when most things had less CPU power than the chip in my credit card.
No, because Knuth’s test was for Algol 60 and Algol 68 is a very different programming language.
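For the curious, transliterations into Algol 68 do exist. Here is a sketch along the lines of the Rosetta Code version, with the call-by-name parameters rewritten as explicit thunks (I haven't verified it against the new GCC frontend):

    PROC a = (INT in k, PROC INT x1, x2, x3, x4, x5) INT:
    BEGIN
      INT k := in k;
      PROC b = INT: (k -:= 1; a (k, b, x1, x2, x3, x4));
      IF k <= 0 THEN x4 + x5 ELSE b FI
    END;
    print ((a (10, INT: 1, INT: -1, INT: -1, INT: 1, INT: 0)))

If all is well, it prints -67, the expected value for k = 10.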
ALGOL 68 (dc) was the go-to language for Burroughs' [6-8]x00 variants.

These were fairly popular for a while and supported advanced features like multiprocessing. The demand for exercising the full range of capabilities was kind of niche, but an "amateur" like myself could make a few bucks if you knew ALGOL.

I used to have the grey manual for the Burroughs variant; I'll have to poke around to see if it's in the attic somewhere.

Any Algol tutorial recommendations? Just to get a feel for what it's all about.
I would recommend the Informal Introduction to Algol 68, available as a PDF at https://algol68-lang.org/resources
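And to give an immediate taste here, a tiny sketch that should run under Algol 68 Genie (POINT and dist are made up for the example):

    BEGIN
      MODE POINT = STRUCT (REAL x, y);
      PROC dist = (POINT a, b) REAL:
        sqrt ((x OF a - x OF b) ** 2 + (y OF a - y OF b) ** 2);
      POINT p := (0, 0), q := (3, 4);
      print ((dist (p, q), newline))  # should print 5 as a REAL #
    END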
Not relevant to GCC, but I was told by a former colleague that one use for an old A68 compiler was apparently to be adapted for the old NA Software Fortran 90 compiler. I'd have expected Ada to be a closer fit, and I don't know how well the decision worked out.
GCC's GNAT frontend is used for modern Ada development these days. Not sure if that's what you mean.
This is great news for GCC! I love how this decision supports older languages like Algol 68, keeping them alive in the FOSS world. It shows the hacker community's dedication to preserving diverse tools.
It is awesome.

That said, it really stands out to me that the two latest GCC languages are COBOL and Algol 68, while LLVM gets Swift and Zig.

And Rust and Julia come from LLVM as well of course.

Wow, that is cool. Pass by name. I always wanted to try it.
Algol 60 had call by name; Algol 68 doesn't, really. It does have "proceduring", which wraps an expression in a function when you pass it to a parameter that is a parameterless procedure value. You can use that to sort of do something like call by name, but the expense is more obvious.
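A sketch of what that looks like (Genie-style syntax; twice is a made-up procedure): the caller has to write the thunk explicitly, so the cost of re-evaluation is visible at the call site.

    PROC twice = (PROC INT thunk) INT: thunk + thunk;  # deprocedures, i.e. calls, thunk twice #
    INT i := 0;
    print ((twice (INT: (i +:= 1; i)), newline))       # prints 3: the thunk is evaluated twice #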
Just pass a string and `eval` it.
Does GNU Algol 68 use a garbage collector?
